# VIT track (GSoC real-time object tracking model) #24201

VIT tracker (Vision Transformer tracker) is a much better model for real-time object tracking. VIT tracker is roughly 20% faster than nanotrack in single-threaded mode on an ARM chip, and the advantage becomes even more pronounced in multi-threaded mode. In addition, VIT tracker demonstrates better accuracy than nanotrack on the LaSOT dataset. Moreover, VIT tracker provides a confidence value during tracking, which can be used to determine whether the target is currently lost.

- opencv_zoo: https://github.com/opencv/opencv_zoo/pull/194
- opencv_extra: https://github.com/opencv/opencv_extra/pull/1088

# Performance comparison

NOTE: The speeds below were measured with **onnxruntime**, because OpenCV has poor support for the transformer architecture for now.

ONNX speed test on an ARM platform (Apple M2), in ms:

| thread count | 1 | 2 | 3 | 4 |
|--------|--------|--------|--------|--------|
| nanotrack | 5.25 | 4.86 | 4.72 | 4.49 |
| vit tracker | 4.18 | 2.41 | 1.97 | **1.46 (3x)** |

ONNX speed test on an x86 platform (Intel i3-10105), in ms:

| thread count | 1 | 2 | 3 | 4 |
|--------|--------|--------|--------|--------|
| nanotrack | 3.20 | 2.75 | 2.46 | 2.55 |
| vit tracker | 3.84 | 2.37 | 2.10 | 2.01 |

OpenCV speed test on an x86 platform (Intel i3-10105), in ms:

| thread count | 1 | 2 | 3 | 4 |
|--------|--------|--------|--------|--------|
| vit tracker | 31.3 | 31.4 | 31.4 | 31.4 |

Performance test on the LaSOT dataset (AUC is the most important metric; a higher AUC means a better tracker):

| LaSOT | AUC | P | Pnorm |
|--------|--------|--------|--------|
| nanotrack | 46.8 | 45.0 | 43.3 |
| vit tracker | 48.6 | 44.8 | 54.7 |

Demo video: https://youtu.be/MJiPnu1ZQRI

In target tracking tasks, the score is an important indicator of whether the current target is lost. In the video, vit tracker tracks the target and displays the current score in the upper-left corner. When the target is lost, the score drops significantly. Nanotrack, by contrast, returns a score of about 0.9 in every situation, so its score cannot be used to determine whether the target is lost. (A usage sketch of score-based lost-target detection follows the checklist below.)

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [x] There are accuracy tests, performance tests and test data in the opencv_extra repository, if applicable. The patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
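For illustration, here is a minimal sketch of score-based lost-target detection with the tracker this PR adds, assuming the Python bindings expose it as `cv2.TrackerVit` (mirroring `cv2.TrackerNano`); the model path, capture source, and the 0.3 threshold are placeholder assumptions, not values taken from this PR.

```python
import cv2

# Minimal sketch, assuming cv2.TrackerVit bindings with a getTrackingScore()
# method; "vitTracker.onnx" and the 0.3 threshold are placeholders.
params = cv2.TrackerVit_Params()
params.net = "vitTracker.onnx"
tracker = cv2.TrackerVit_create(params)

cap = cv2.VideoCapture(0)                      # placeholder capture source
ok, frame = cap.read()
bbox = cv2.selectROI("select target", frame)   # initial box as (x, y, w, h)
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    score = tracker.getTrackingScore()
    # Unlike nanotrack's near-constant ~0.9, this score drops when the
    # target is lost, so a simple threshold can flag tracking failure.
    if found and score > 0.3:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "score: %.2f" % score, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    else:
        cv2.putText(frame, "target lost", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow("vit tracker", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
```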
# OpenCV deep learning module samples

## Model Zoo

Check the wiki for a list of tested models.

If OpenCV is built with Intel's Inference Engine support, you can use Intel's pre-trained models.

There are different preprocessing parameters, such as mean subtraction or scale factors, for different models. You can check the most popular models and their parameters in the `models.yml` configuration file. It can also be used to alias sample parameters; for example:

```
python object_detection.py opencv_fd --model /path/to/caffemodel --config /path/to/prototxt
```
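For reference, an alias entry in `models.yml` looks roughly like this; the snippet below is an abridged sketch of the `opencv_fd` entry, not a verbatim copy, so check the actual file for the exact fields and values.

```yaml
# Abridged sketch of an alias entry in models.yml (not a verbatim copy).
opencv_fd:
  model: "opencv_face_detector.caffemodel"
  config: "opencv_face_detector.prototxt"
  mean: [104, 177, 123]   # per-channel mean subtraction (BGR order)
  scale: 1.0
  width: 300              # network input size
  height: 300
  rgb: false              # the model expects BGR input
  sample: "object_detection"
```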
Check the `-h` option to see which values are used by default:

```
python object_detection.py opencv_fd -h
```
## Sample models

You can download sample models using `download_models.py`. For example, the following command downloads network weights for the OpenCV Face Detector model and stores them in a `FaceDetector` folder:

```
python download_models.py --save_dir FaceDetector opencv_fd
```
You can use default configuration files adopted for OpenCV from here.

You can also use the script to download the necessary files from your own code. Assume you have the following code inside `your_script.py`:

```python
from download_models import downloadFile

filepath1 = downloadFile("https://drive.google.com/uc?export=download&id=0B3gersZ2cHIxRm5PMWRoTkdHdHc", None, filename="MobileNetSSD_deploy.caffemodel", save_dir="save_dir_1")
filepath2 = downloadFile("https://drive.google.com/uc?export=download&id=0B3gersZ2cHIxRm5PMWRoTkdHdHc", "994d30a8afaa9e754d17d2373b2d62a7dfbaaf7a", filename="MobileNetSSD_deploy.caffemodel")
print(filepath1)
print(filepath2)
# Your code
```
By running the following commands, you will get the `MobileNetSSD_deploy.caffemodel` file:

```bash
export OPENCV_DOWNLOAD_DATA_PATH=download_folder
python your_script.py
```

Note that you can provide a directory using the `save_dir` parameter or via the `OPENCV_SAVE_DIR` environment variable.
## Face detection

An original model with single-precision floating point weights has been quantized using the TensorFlow framework. To achieve the best accuracy, run the model on BGR images resized to 300x300, applying mean subtraction of (104, 177, 123) for the blue, green and red channels respectively.
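In code, this preprocessing corresponds to a `cv2.dnn.blobFromImage` call along the following lines; the file names are placeholders for the face detector files described above.

```python
import cv2

# Placeholder file names for the face detector model and config.
net = cv2.dnn.readNetFromCaffe("opencv_face_detector.prototxt",
                               "opencv_face_detector.caffemodel")
frame = cv2.imread("image.jpg")  # OpenCV loads images in BGR order

# Resize to 300x300 and subtract the per-channel mean (104, 177, 123);
# swapRB=False keeps the BGR channel order the model expects.
blob = cv2.dnn.blobFromImage(frame, scalefactor=1.0, size=(300, 300),
                             mean=(104, 177, 123), swapRB=False, crop=False)
net.setInput(blob)
detections = net.forward()
```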
The following accuracy metrics were obtained using the COCO object detection evaluation tool on the FDDB dataset (see script), both resizing images to 300x300 and keeping the original image sizes.
```
AP - Average Precision                            | FP32/FP16 | UINT8          | FP32/FP16 | UINT8          |
AR - Average Recall                               | 300x300   | 300x300        | any size  | any size       |
--------------------------------------------------|-----------|----------------|-----------|----------------|
AP @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] | 0.408     | 0.408          | 0.378     | 0.328 (-0.050) |
AP @[ IoU=0.50      | area=   all | maxDets=100 ] | 0.849     | 0.849          | 0.797     | 0.790 (-0.007) |
AP @[ IoU=0.75      | area=   all | maxDets=100 ] | 0.251     | 0.251          | 0.208     | 0.140 (-0.068) |
AP @[ IoU=0.50:0.95 | area= small | maxDets=100 ] | 0.050     | 0.051 (+0.001) | 0.107     | 0.070 (-0.037) |
AP @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] | 0.381     | 0.379 (-0.002) | 0.380     | 0.368 (-0.012) |
AP @[ IoU=0.50:0.95 | area= large | maxDets=100 ] | 0.455     | 0.455          | 0.412     | 0.337 (-0.075) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] | 0.299     | 0.299          | 0.279     | 0.246 (-0.033) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] | 0.482     | 0.482          | 0.476     | 0.436 (-0.040) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] | 0.496     | 0.496          | 0.491     | 0.451 (-0.040) |
AR @[ IoU=0.50:0.95 | area= small | maxDets=100 ] | 0.189     | 0.193 (+0.004) | 0.284     | 0.232 (-0.052) |
AR @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] | 0.481     | 0.480 (-0.001) | 0.470     | 0.458 (-0.012) |
AR @[ IoU=0.50:0.95 | area= large | maxDets=100 ] | 0.528     | 0.528          | 0.520     | 0.462 (-0.058) |
```