# YOLO DNNs {#tutorial_dnn_yolo}
@tableofcontents
@prev_tutorial{tutorial_dnn_openvino}
@next_tutorial{tutorial_dnn_javascript}
|  |  |
| --- | --- |
| Original author | Alessandro de Oliveira Faria |
| Extended by | Abduragim Shtanchaev |
| Compatibility | OpenCV >= 4.9.0 |
## Running pre-trained YOLO model in OpenCV
Deploying pre-trained models is a common task in machine learning, particularly when working with hardware that does not support certain frameworks like PyTorch. This guide provides a comprehensive overview of exporting pre-trained YOLO family models from PyTorch and deploying them using OpenCV's DNN framework. For demonstration purposes, we will focus on the YOLOX model, but the methodology applies to other supported models.
@note Currently, OpenCV supports the following YOLO models:
- YOLOX,
- YoloNas,
- YOLOv8,
- YOLOv7,
- YOLOv6,
- YOLOv5,
- YOLOv4.

This support includes pre- and post-processing routines specific to these models. While older versions of YOLO are also supported by OpenCV in Darknet format, they are outside the scope of this tutorial.
Assuming that we have successfully trained a YOLOX model, the subsequent step involves exporting and running this model with OpenCV. There are several critical considerations to address before proceeding with this process. Let's delve into these aspects.
## YOLO's Pre-processing & Output
Understanding the inputs and outputs of YOLO family detectors is pivotal. These detectors, like most deep neural networks (DNNs), vary in input size depending on the model's scale.
| Model Scale | Input Size |
| ----------- | ---------- |
| Small Models | 416x416 |
| Midsize Models | 640x640 |
| Large Models | 1280x1280 |
This table provides a quick reference for the input dimensions commonly used across YOLO models. These are standard input shapes; make sure to use the input size that your model was trained with if it differs from the sizes listed above.
The next critical element in the process involves understanding the specifics of image pre-processing for YOLO detectors. While the fundamental pre-processing approach remains consistent across the YOLO family, there are subtle yet crucial differences that must be accounted for to avoid any degradation in performance. Key among these are the resize type and the padding value applied post-resize. For instance, the YOLOX model utilizes a LetterBox resize method and a padding value of `114.0`. It is imperative to ensure that these parameters, along with the normalization constants, are appropriately matched to the model being exported.
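
As an illustration, below is a minimal sketch of YOLOX-style pre-processing built on `cv::dnn::blobFromImageWithParams` and `cv::dnn::Image2BlobParams`. The input file name and the 640x640 input size are assumptions for the example, and the normalization constants (no scaling, no mean subtraction, BGR channel order) reflect the YOLOX defaults discussed above:

@code{.cpp}
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

using namespace cv;
using namespace cv::dnn;

int main()
{
    Mat frame = imread("image.jpg"); // placeholder input image

    Image2BlobParams params;
    params.scalefactor = Scalar::all(1.0);    // YOLOX uses raw pixel values, no 1/255 scaling
    params.size = Size(640, 640);             // standard midsize-model input
    params.mean = Scalar::all(0.0);           // no mean subtraction
    params.swapRB = false;                    // YOLOX expects BGR input
    params.ddepth = CV_32F;
    params.datalayout = DNN_LAYOUT_NCHW;
    params.paddingmode = DNN_PMODE_LETTERBOX; // aspect-ratio preserving resize plus padding
    params.borderValue = 114.0;               // the padding value discussed above

    Mat blob = blobFromImageWithParams(frame, params);
    // `blob` can now be passed to cv::dnn::Net::setInput()
    return 0;
}
@endcode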
Regarding the model's output, it typically takes the form of a tensor with dimensions [BxNxC+5] or [BxNxC+4], where 'B' represents the batch size, 'N' denotes the number of anchors, and 'C' signifies the number of classes (for instance, 80 classes if the model is trained on the COCO dataset). The additional 5 in the former tensor structure corresponds to the bounding box coordinates (cx, cy, w, h) plus a box objectness/confidence score (obj), while each of the C entries holds a per-class confidence score. Notably, the YOLOv8 model's output is shaped as [BxNxC+4]: there is no explicit objectness score, and the object score is directly inferred from the class scores. For the YOLOX model specifically, it is also necessary to incorporate anchor points to rescale predictions back to the image domain. This step will be integrated into the ONNX graph, a process that we will detail further in the subsequent sections.
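
To make that layout concrete, here is a hedged sketch of decoding a [BxNxC+5] output tensor. It assumes B = 1 and that the box coordinates have already been decoded to input-image scale (as with a YOLOX model exported with `--decode_in_inference`); the function and variable names are illustrative:

@code{.cpp}
#include <opencv2/core.hpp>
#include <vector>

// Decode a [1 x N x C+5] YOLO output where each row is
// [cx, cy, w, h, obj, cls_0 ... cls_{C-1}].
void decodeOutput(const cv::Mat& output, float confThreshold,
                  std::vector<cv::Rect2d>& boxes,
                  std::vector<float>& scores,
                  std::vector<int>& classIds)
{
    // Drop the batch dimension: view the tensor as an N x (C+5) matrix
    cv::Mat detections = output.reshape(1, output.size[1]);
    for (int i = 0; i < detections.rows; ++i)
    {
        const float* row = detections.ptr<float>(i);
        float objectness = row[4];

        // Find the best class among columns 5 .. C+4
        double maxClassScore;
        cv::Point maxLoc;
        cv::minMaxLoc(detections.row(i).colRange(5, detections.cols),
                      nullptr, &maxClassScore, nullptr, &maxLoc);

        float confidence = objectness * (float)maxClassScore;
        if (confidence < confThreshold)
            continue;

        // Convert the center-based box to a top-left based rectangle
        float cx = row[0], cy = row[1], w = row[2], h = row[3];
        boxes.emplace_back(cx - w / 2, cy - h / 2, w, h);
        scores.push_back(confidence);
        classIds.push_back(maxLoc.x);
    }
}
@endcode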
## PyTorch Model Export
Now that we know the pre-processing parameters, we can proceed to export the model from PyTorch to an ONNX graph. Since this tutorial uses YOLOX as the sample model, let's use its export for demonstration purposes (the process is identical for the rest of the YOLO detectors). To export YOLOX we can just use its export script. In particular, we need the following commands:
@code{.bash}
git clone https://github.com/Megvii-BaseDetection/YOLOX.git
cd YOLOX
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth # download pre-trained weights
python3 -m tools.export_onnx --output-name yolox_s.onnx -n yolox-s -c yolox_s.pth --decode_in_inference
@endcode
@note Here `--decode_in_inference` is used to include anchor box creation in the ONNX graph itself. It sets this value to `True`, which subsequently includes the anchor generation function in the graph.
Below we demonstrate a minimal version of the export script (which can be used for models other than YOLOX) in case it is needed. However, each YOLO repository usually provides a predefined export script.
@code{.py}
import onnx
import torch
from onnxsim import simplify

# `model`, `ckpt_file`, `exp` and `args` are assumed to be defined by the
# surrounding export script (model definition, checkpoint path, experiment
# config and CLI arguments, respectively).

# load the model state dict
ckpt = torch.load(ckpt_file, map_location="cpu")
model.load_state_dict(ckpt)

# prepare a dummy input with the shape the model was trained on
dummy_input = torch.randn(args.batch_size, 3, exp.test_size[0], exp.test_size[1])

# export the model to ONNX with a dynamic batch dimension
torch.onnx.export(
    model,
    dummy_input,
    args.output_name,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"},
                  "output": {0: "batch"}})

# use onnx-simplifier to remove redundant nodes from the graph
onnx_model = onnx.load(args.output_name)
model_simp, check = simplify(onnx_model)
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, args.output_name)
@endcode
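
Before wiring the exported graph into a full pipeline, it can be worth a quick sanity check that OpenCV's ONNX importer actually parses it. A minimal sketch (the file name is the one produced by the export above):

@code{.cpp}
#include <iostream>
#include <opencv2/dnn.hpp>

int main()
{
    // readNetFromONNX throws if the graph contains unsupported layers
    cv::dnn::Net net = cv::dnn::readNetFromONNX("yolox_s.onnx");
    std::cout << "Parsed " << net.getLayerNames().size() << " layers" << std::endl;
    return 0;
}
@endcode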
## Running Yolo ONNX detector with OpenCV Sample
Once we have the ONNX graph of the model, we can simply run it with OpenCV's sample. For that we need to make sure that:
- OpenCV is built with the `-DBUILD_EXAMPLES=ON` flag.
- Navigate to OpenCV's `build` directory
- Run the following command:
@code{.sh}
./bin/example_dnn_yolo_detector --input=<path_to_your_input_file>
--classes=<path_to_class_names_file>
--thr=<confidence_threshold>
--nms=<non_maximum_suppression_threshold>
--mean=<mean_normalization_value>
--scale=<scale_factor>
--yolo=<yolo_model_version>
--padvalue=<padding_value>
--paddingmode=<padding_mode>
--backend=<computation_backend>
--target=<target_computation_device>
@endcode
VIDEO DEMO: @youtube{NHtRlndE2cg}
- `--input`: File path to your input image or video. If omitted, it will capture frames from a camera.
- `--classes`: File path to a text file containing class names for object detection.
- `--thr`: Confidence threshold for detection (e.g., 0.5).
- `--nms`: Non-maximum suppression threshold (e.g., 0.4).
- `--mean`: Mean normalization value (e.g., 0.0 for no mean normalization).
- `--scale`: Scale factor for input normalization (e.g., 1.0).
- `--yolo`: YOLO model version (e.g., YOLOv3, YOLOv4, etc.).
- `--padvalue`: Padding value used in pre-processing (e.g., 114.0).
- `--paddingmode`: Method for handling image resizing and padding. Options: 0 (resize without extra processing), 1 (crop after resize), 2 (resize with aspect ratio preservation).
- `--backend`: Selection of computation backend (0 for automatic, 1 for Halide, 2 for OpenVINO, etc.).
- `--target`: Selection of target computation device (0 for CPU, 1 for OpenCL, etc.).
- `--device`: Camera device number (0 for default camera). If `--input` is not provided, the camera with index 0 will be used by default.
Here `mean`, `scale`, `padvalue`, and `paddingmode` should exactly match the values we discussed in the pre-processing section in order for the model to reproduce its PyTorch results.
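
For example, with the YOLOX values discussed in the pre-processing section (no scaling or mean subtraction, letterbox padding with 114.0), a hypothetical invocation could look like this (the model path is a placeholder):

@code{.sh}
./bin/example_dnn_yolo_detector --model=yolox_s.onnx --yolo=yolox \
    --mean=0.0 --scale=1.0 --padvalue=114.0 --paddingmode=2 \
    --thr=0.5 --nms=0.4
@endcode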
To demonstrate how to run OpenCV YOLO samples without your own pretrained model, follow these instructions:
- Ensure Python is installed on your platform.
- Confirm that OpenCV is built with the `-DBUILD_EXAMPLES=ON` flag.
Run the YOLOX detector (with default values):
@code{.sh}
git clone https://github.com/opencv/opencv_extra.git
cd opencv_extra/testdata/dnn
python download_models.py yolox_s_inf_decoder
cd ..
export OPENCV_TEST_DATA_PATH=$(pwd)
cd <build directory of OpenCV>
./bin/example_dnn_yolo_detector
@endcode
This will execute the YOLOX detector with your camera. For YOLOv8 (for instance), follow these additional steps (note that `--scale=0.003921568627` below corresponds to 1/255 pixel normalization):
@code{.sh}
cd opencv_extra/testdata/dnn
python download_models.py yolov8
cd ..
export OPENCV_TEST_DATA_PATH=$(pwd)
cd <build directory of OpenCV>
./bin/example_dnn_yolo_detector --model=onnx/models/yolov8n.onnx --yolo=yolov8 --mean=0.0 --scale=0.003921568627 --paddingmode=2 --padvalue=144.0 --thr=0.5 --nms=0.4 --rgb=0
@endcode
## Building a Custom Pipeline
Sometimes there is a need to make custom adjustments in the inference pipeline, and with the OpenCV DNN module this is also quite easy to achieve. Below we outline the sample's implementation details:
- Import required libraries
@snippet samples/dnn/yolo_detector.cpp includes
- Read ONNX graph and create neural network model:
@snippet samples/dnn/yolo_detector.cpp read_net
- Read image and pre-process it:
@snippet samples/dnn/yolo_detector.cpp preprocess_params
@snippet samples/dnn/yolo_detector.cpp preprocess_call
@snippet samples/dnn/yolo_detector.cpp preprocess_call_func
- Inference:
@snippet samples/dnn/yolo_detector.cpp forward_buffers
@snippet samples/dnn/yolo_detector.cpp forward
- Post-Processing
All post-processing steps are implemented in the function `yoloPostProcess`. Please note that the NMS step is not included in the ONNX graph; the sample uses an OpenCV function for it.
@snippet samples/dnn/yolo_detector.cpp postprocess
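
Since NMS happens outside the ONNX graph, here is a hedged sketch of how OpenCV's `cv::dnn::NMSBoxes` can be applied to the decoded boxes and scores (the surrounding variable names are illustrative):

@code{.cpp}
#include <opencv2/dnn.hpp>
#include <vector>

void applyNMS(const std::vector<cv::Rect2d>& boxes,
              const std::vector<float>& scores,
              float scoreThreshold, float nmsThreshold,
              std::vector<int>& keptIndices)
{
    // Fills keptIndices with the indices of boxes that survive
    // non-maximum suppression; all other detections are discarded.
    cv::dnn::NMSBoxes(boxes, scores, scoreThreshold, nmsThreshold, keptIndices);
}
@endcode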
- Draw predicted boxes
@snippet samples/dnn/yolo_detector.cpp draw_boxes