
# OpenCV deep learning module samples

## Model Zoo

### Object detection

| Model | Scale | Size WxH | Mean subtraction | Channels order |
|-------|-------|----------|------------------|----------------|
| MobileNet-SSD, Caffe | 0.00784 (2/255) | 300x300 | 127.5 127.5 127.5 | BGR |
| OpenCV face detector | 1.0 | 300x300 | 104 177 123 | BGR |
| SSDs from TensorFlow | 0.00784 (2/255) | 300x300 | 127.5 127.5 127.5 | RGB |
| YOLO | 0.00392 (1/255) | 416x416 | 0 0 0 | RGB |
| VGG16-SSD | 1.0 | 300x300 | 104 117 123 | BGR |
| Faster-RCNN | 1.0 | 800x600 | 102.9801 115.9465 122.7717 | BGR |
| R-FCN | 1.0 | 800x600 | 102.9801 115.9465 122.7717 | BGR |
| Faster-RCNN, ResNet backbone | 1.0 | 300x300 | 103.939 116.779 123.68 | RGB |
| Faster-RCNN, InceptionV2 backbone | 0.00784 (2/255) | 300x300 | 127.5 127.5 127.5 | RGB |
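As a minimal sketch (not one of the samples above), the scale, size, mean-subtraction and channel-order columns map directly onto the parameters of `cv2.dnn.blobFromImage`. The example below uses the "MobileNet-SSD, Caffe" row; the model file names and the input image are assumptions, so substitute your own paths:

```python
import cv2 as cv

# Assumed file names for the Caffe MobileNet-SSD model; adjust to your local copies.
net = cv.dnn.readNetFromCaffe('MobileNetSSD_deploy.prototxt',
                              'MobileNetSSD_deploy.caffemodel')

img = cv.imread('example.jpg')

# Table row "MobileNet-SSD, Caffe": scale 0.00784 (2/255), size 300x300,
# mean 127.5 127.5 127.5, channels order BGR (so swapRB=False).
blob = cv.dnn.blobFromImage(img, scalefactor=0.00784, size=(300, 300),
                            mean=(127.5, 127.5, 127.5), swapRB=False, crop=False)
net.setInput(blob)
detections = net.forward()  # shape: [1, 1, N, 7]

h, w = img.shape[:2]
for det in detections[0, 0]:
    confidence = float(det[2])
    if confidence > 0.5:
        # Detection rows hold normalized [left, top, right, bottom] coordinates.
        left, top = int(det[3] * w), int(det[4] * h)
        right, bottom = int(det[5] * w), int(det[6] * h)
        cv.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)
```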

#### Face detection

The original model with single precision floating point weights has been quantized using the TensorFlow framework. To achieve the best accuracy, run the model on BGR images resized to 300x300, applying mean subtraction of (104, 177, 123) to the blue, green and red channels respectively.
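A minimal preprocessing sketch for the quantized detector might look like the following; the graph and config file names are assumptions standing in for your downloaded copies:

```python
import cv2 as cv

# Assumed file names for the quantized TensorFlow face detection graph and its config.
net = cv.dnn.readNetFromTensorflow('opencv_face_detector_uint8.pb',
                                   'opencv_face_detector.pbtxt')

img = cv.imread('face.jpg')  # OpenCV loads images in BGR order

# 300x300 input, mean subtraction (104, 177, 123), no scaling, no channel swap.
blob = cv.dnn.blobFromImage(img, 1.0, (300, 300), (104, 177, 123),
                            swapRB=False, crop=False)
net.setInput(blob)
detections = net.forward()
```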

The following accuracy metrics were obtained with the COCO object detection evaluation tool on the FDDB dataset (see script), both with inputs resized to 300x300 and with the original image sizes kept.

AP - Average Precision                            | FP32/FP16 | UINT8          | FP32/FP16 | UINT8          |
AR - Average Recall                               | 300x300   | 300x300        | any size  | any size       |
--------------------------------------------------|-----------|----------------|-----------|----------------|
AP @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] | 0.408     | 0.408          | 0.378     | 0.328 (-0.050) |
AP @[ IoU=0.50      | area=   all | maxDets=100 ] | 0.849     | 0.849          | 0.797     | 0.790 (-0.007) |
AP @[ IoU=0.75      | area=   all | maxDets=100 ] | 0.251     | 0.251          | 0.208     | 0.140 (-0.068) |
AP @[ IoU=0.50:0.95 | area= small | maxDets=100 ] | 0.050     | 0.051 (+0.001) | 0.107     | 0.070 (-0.037) |
AP @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] | 0.381     | 0.379 (-0.002) | 0.380     | 0.368 (-0.012) |
AP @[ IoU=0.50:0.95 | area= large | maxDets=100 ] | 0.455     | 0.455          | 0.412     | 0.337 (-0.075) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] | 0.299     | 0.299          | 0.279     | 0.246 (-0.033) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] | 0.482     | 0.482          | 0.476     | 0.436 (-0.040) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] | 0.496     | 0.496          | 0.491     | 0.451 (-0.040) |
AR @[ IoU=0.50:0.95 | area= small | maxDets=100 ] | 0.189     | 0.193 (+0.004) | 0.284     | 0.232 (-0.052) |
AR @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] | 0.481     | 0.480 (-0.001) | 0.470     | 0.458 (-0.012) |
AR @[ IoU=0.50:0.95 | area= large | maxDets=100 ] | 0.528     | 0.528          | 0.520     | 0.462 (-0.058) |

### Classification

| Model | Scale | Size WxH | Mean subtraction | Channels order |
|-------|-------|----------|------------------|----------------|
| GoogLeNet | 1.0 | 224x224 | 104 117 123 | BGR |
| SqueezeNet | 1.0 | 227x227 | 0 0 0 | BGR |
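For example, a GoogLeNet classification pass following the table above could look roughly like this sketch (the Caffe model file names and input image are assumptions):

```python
import cv2 as cv
import numpy as np

# Assumed file names for the Caffe GoogLeNet model; adjust paths as needed.
net = cv.dnn.readNetFromCaffe('bvlc_googlenet.prototxt',
                              'bvlc_googlenet.caffemodel')

img = cv.imread('example.jpg')

# GoogLeNet row: scale 1.0, size 224x224, mean 104 117 123, BGR input.
blob = cv.dnn.blobFromImage(img, 1.0, (224, 224), (104, 117, 123), swapRB=False)
net.setInput(blob)
probs = net.forward().flatten()

class_id = int(np.argmax(probs))
print('class id: %d, confidence: %.4f' % (class_id, probs[class_id]))
```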

### Semantic segmentation

| Model | Scale | Size WxH | Mean subtraction | Channels order |
|-------|-------|----------|------------------|----------------|
| ENet | 0.00392 (1/255) | 1024x512 | 0 0 0 | RGB |
| FCN8s | 1.0 | 500x500 | 0 0 0 | BGR |
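A rough sketch of running ENet with these parameters and turning the network output into a per-pixel class map (the Torch model file name and input image are assumptions):

```python
import cv2 as cv
import numpy as np

# Assumed file name for the ENet Torch model; adjust to your local copy.
net = cv.dnn.readNetFromTorch('Enet-model-best.net')

img = cv.imread('street.jpg')

# ENet row: scale 0.00392 (1/255), size 1024x512 (WxH), zero mean, RGB input.
blob = cv.dnn.blobFromImage(img, 0.00392, (1024, 512), (0, 0, 0), swapRB=True)
net.setInput(blob)
score = net.forward()  # shape: [1, numClasses, 512, 1024]

# Per-pixel class map: index of the highest-scoring class at every pixel,
# resized back to the original image resolution.
class_map = np.argmax(score[0], axis=0).astype(np.uint8)
class_map = cv.resize(class_map, (img.shape[1], img.shape[0]),
                      interpolation=cv.INTER_NEAREST)
```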

## References