Add sample support of YOLOv9 and YOLOv10 in OpenCV #25794

This PR adds sample support of [`YOLOv9`](https://github.com/WongKinYiu/yolov9) and [`YOLOv10`](https://github.com/THU-MIG/yolov10/tree/main) in OpenCV. Models for this test are located in this [PR](https://github.com/opencv/opencv_extra/pull/1186).

**Running YOLOv10 using OpenCV**

1. In order to run `YOLOv10`, one needs to cut off the postprocessing with dynamic shapes from the torch model and then convert it to ONNX. If someone is looking for a ready solution, there is [this forked branch](https://github.com/Abdurrahheem/yolov10/tree/ash/opencv-export) of the official YOLOv10 repository. In particular, follow this procedure:

    ```bash
    git clone git@github.com:Abdurrahheem/yolov10.git
    conda create -n yolov10 python=3.9
    conda activate yolov10
    pip install -r requirements.txt
    python export_opencv.py --model=<model-name> --imgsz=<input-img-size>
    ```

    By default `model="yolov10s"` and `imgsz=(480,640)`. This will generate the file `yolov10s.onnx`, which can be used for inference in OpenCV.

2. For the inference part in OpenCV, one can use the `yolo_detector.cpp` [sample](https://github.com/opencv/opencv/blob/4.x/samples/dnn/yolo_detector.cpp). If you have followed the exporting procedure above, you can use the following commands to run the model:

    ```bash
    # build opencv from source first
    cd build
    ./bin/example_dnn_yolo_detector --model=<path-to-yolov10s.onnx-file> --yolo=yolov10 --width=640 --height=480 --input=<path-to-image> --scale=0.003921568627 --padvalue=114
    ```

    If you do not specify the `--input` argument, OpenCV will grab the first camera available on your platform. For more details on how to run the `yolo_detector.cpp` sample, see this [guide](https://docs.opencv.org/4.x/da/d9d/tutorial_dnn_yolo.html#autotoc_md443).

**Running YOLOv9 using OpenCV**

1. Export the model following the [official guide](https://github.com/WongKinYiu/yolov9) of the YOLOv9 repository. In particular, you can do the following for converting:

    ```bash
    git clone https://github.com/WongKinYiu/yolov9.git
    cd yolov9
    conda create -n yolov9 python=3.9
    conda activate yolov9
    pip install -r requirements.txt
    wget https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-t-converted.pt
    python export.py --weights=./yolov9-t-converted.pt --include=onnx --img-size=(480,640)
    ```

    This will generate the `yolov9-t-converted.onnx` file.

2. Inference in OpenCV:

    ```bash
    # build opencv from source first
    cd build
    ./bin/example_dnn_yolo_detector --model=<path-to-yolov9-t-converted.onnx> --yolo=yolov9 --width=640 --height=480 --scale=0.003921568627 --padvalue=114 --path=<path-to-image>
    ```

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is an accuracy test, performance test and test data in the opencv_extra repository, if applicable. The patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
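The `--scale=0.003921568627` (i.e. 1/255) and `--padvalue=114` flags used in the detector commands above correspond to the standard YOLO letterbox preprocessing: the image is resized to fit the network input while preserving aspect ratio, and the remaining border is filled with the pad value. As a rough sketch (this is an illustrative helper, not code from `yolo_detector.cpp`), the geometry works out like this:

```python
# Illustrative sketch of letterbox geometry for a 640x480 network input.
# Assumption: this mirrors the usual YOLO letterbox convention; the actual
# sample implements this in C++ inside yolo_detector.cpp.

def letterbox_params(img_w, img_h, net_w=640, net_h=480):
    """Return (resized_w, resized_h, pad_left, pad_top) for a letterbox resize."""
    r = min(net_w / img_w, net_h / img_h)   # single ratio keeps aspect ratio
    new_w, new_h = round(img_w * r), round(img_h * r)
    pad_left = (net_w - new_w) // 2         # border filled with padvalue (114)
    pad_top = (net_h - new_h) // 2
    return new_w, new_h, pad_left, pad_top

# A 1280x720 frame fits a 640x480 input at ratio 0.5 -> 640x360,
# padded with 60 pixels of padvalue on top and bottom.
print(letterbox_params(1280, 720))  # -> (640, 360, 0, 60)
```

After the resize and padding, every pixel is multiplied by the scale factor 1/255 to map values into [0, 1].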
# OpenCV deep learning module samples

## Model Zoo
Check the wiki for a list of tested models.
If OpenCV is built with Intel's Inference Engine support, you can use Intel's pre-trained models.
There are different preprocessing parameters, such as mean subtraction or scale factors, for different models. You may check the most popular models and their parameters in the models.yml configuration file. It can also be used to alias sample parameters. For example:
python object_detection.py opencv_fd --model /path/to/caffemodel --config /path/to/prototxt
Check the `-h` option to see which values are used by default:
python object_detection.py opencv_fd -h
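An alias entry in models.yml bundles the model files and preprocessing parameters under one name. Roughly, an entry has the following shape (the values below are illustrative; check the actual models.yml for the exact fields and numbers):

```yaml
# Illustrative sketch of a models.yml entry, not copied verbatim from the file.
opencv_fd:
  model: "opencv_face_detector.caffemodel"   # weights file
  config: "opencv_face_detector.prototxt"    # network description
  mean: [104, 177, 123]                      # per-channel mean subtraction (BGR)
  scale: 1.0                                 # multiplier applied after mean subtraction
  width: 300                                 # network input width
  height: 300                                # network input height
  rgb: false                                 # keep BGR channel order
  sample: "object_detection"                 # sample this alias belongs to
```

Passing `opencv_fd` on the command line makes the sample pick up all of these parameters at once, so only the file paths need to be overridden.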
## Sample models
You can download sample models using `download_models.py`. For example, the following command will download network weights for the OpenCV Face Detector model and store them in a `FaceDetector` folder:
python download_models.py --save_dir FaceDetector opencv_fd
You can use default configuration files adopted for OpenCV from here.
You can also use the script to download the necessary files from your own code. Assume you have the following code inside `your_script.py`:
from download_models import downloadFile
filepath1 = downloadFile("https://drive.google.com/uc?export=download&id=0B3gersZ2cHIxRm5PMWRoTkdHdHc", None, filename="MobileNetSSD_deploy.caffemodel", save_dir="save_dir_1")
filepath2 = downloadFile("https://drive.google.com/uc?export=download&id=0B3gersZ2cHIxRm5PMWRoTkdHdHc", "994d30a8afaa9e754d17d2373b2d62a7dfbaaf7a", filename="MobileNetSSD_deploy.caffemodel")
print(filepath1)
print(filepath2)
# Your code
By running the following commands, you will get the `MobileNetSSD_deploy.caffemodel` file:
export OPENCV_DOWNLOAD_DATA_PATH=download_folder
python your_script.py
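The second argument to `downloadFile` in the snippet above is a SHA-1 checksum used to verify the downloaded file (passing `None` skips verification). Conceptually, the check amounts to the following (an illustrative sketch, not the actual `download_models.py` code):

```python
# Sketch of SHA-1 based file verification, as used conceptually by downloadFile.
import hashlib

def sha1_matches(data: bytes, expected_sha1: str) -> bool:
    """Return True if the SHA-1 hex digest of data equals the expected checksum."""
    return hashlib.sha1(data).hexdigest() == expected_sha1.lower()

payload = b"example file contents"
good = hashlib.sha1(payload).hexdigest()
print(sha1_matches(payload, good))      # matching checksum -> True
print(sha1_matches(payload, "0" * 40))  # wrong checksum -> False
```

If the checksum does not match, the file is considered corrupted and should be re-downloaded.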
Note that you can provide a directory using the `save_dir` parameter or via the `OPENCV_SAVE_DIR` environment variable.
## Face detection
An origin model with single precision floating point weights has been quantized using the TensorFlow framework. To achieve the best accuracy, run the model on BGR images resized to 300x300, applying mean subtraction of values (104, 177, 123) for the blue, green and red channels correspondingly.
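The preprocessing described above can be sketched as a simple per-channel operation (an illustrative snippet only; the sample itself does this via OpenCV's blob-creation routines, and the resize to 300x300 is assumed to happen separately):

```python
# Minimal sketch of the mean subtraction step described above.
# Assumption for illustration: one pixel is a (B, G, R) tuple of floats.

BGR_MEAN = (104.0, 177.0, 123.0)  # mean for blue, green, red channels

def subtract_mean(pixel_bgr):
    """Subtract the per-channel mean from one BGR pixel."""
    return tuple(v - m for v, m in zip(pixel_bgr, BGR_MEAN))

# A pixel exactly equal to the mean maps to zero in every channel.
print(subtract_mean((104, 177, 123)))  # -> (0.0, 0.0, 0.0)
```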
The following accuracy metrics were obtained using the COCO object detection evaluation tool on the FDDB dataset (see the script), both when resizing input images to 300x300 and when keeping the original image sizes.
| Metric (AP - Average Precision, AR - Average Recall) | FP32/FP16, 300x300 | UINT8, 300x300 | FP32/FP16, any size | UINT8, any size |
|------------------------------------------------------|--------------------|----------------|---------------------|-----------------|
| AP @[ IoU=0.50:0.95 \| area= all \| maxDets=100 ]    | 0.408 | 0.408 | 0.378 | 0.328 (-0.050) |
| AP @[ IoU=0.50 \| area= all \| maxDets=100 ]         | 0.849 | 0.849 | 0.797 | 0.790 (-0.007) |
| AP @[ IoU=0.75 \| area= all \| maxDets=100 ]         | 0.251 | 0.251 | 0.208 | 0.140 (-0.068) |
| AP @[ IoU=0.50:0.95 \| area= small \| maxDets=100 ]  | 0.050 | 0.051 (+0.001) | 0.107 | 0.070 (-0.037) |
| AP @[ IoU=0.50:0.95 \| area=medium \| maxDets=100 ]  | 0.381 | 0.379 (-0.002) | 0.380 | 0.368 (-0.012) |
| AP @[ IoU=0.50:0.95 \| area= large \| maxDets=100 ]  | 0.455 | 0.455 | 0.412 | 0.337 (-0.075) |
| AR @[ IoU=0.50:0.95 \| area= all \| maxDets= 1 ]     | 0.299 | 0.299 | 0.279 | 0.246 (-0.033) |
| AR @[ IoU=0.50:0.95 \| area= all \| maxDets= 10 ]    | 0.482 | 0.482 | 0.476 | 0.436 (-0.040) |
| AR @[ IoU=0.50:0.95 \| area= all \| maxDets=100 ]    | 0.496 | 0.496 | 0.491 | 0.451 (-0.040) |
| AR @[ IoU=0.50:0.95 \| area= small \| maxDets=100 ]  | 0.189 | 0.193 (+0.004) | 0.284 | 0.232 (-0.052) |
| AR @[ IoU=0.50:0.95 \| area=medium \| maxDets=100 ]  | 0.481 | 0.480 (-0.001) | 0.470 | 0.458 (-0.012) |
| AR @[ IoU=0.50:0.95 \| area= large \| maxDets=100 ]  | 0.528 | 0.528 | 0.520 | 0.462 (-0.058) |