opencv/samples/dnn
Abduragim Shtanchaev 050085c996
Merge pull request #25950 from Abdurrahheem:ash/add-inpainting-sample
Diffusion Inpainting Sample #25950

This PR adds an inpainting sample based on the [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/pdf/2112.10752) paper (reference GitHub [repository](https://github.com/CompVis/latent-diffusion)).


Steps to run the model:

1. First, you need the ONNX graph of the Latent Diffusion Model. You can get it in two different ways.

> a. Generate it using this [repo](https://github.com/Abdurrahheem/latent-diffusion/tree/ash/export2onnx) and follow the instructions below:

```bash
git clone https://github.com/Abdurrahheem/latent-diffusion.git
cd latent-diffusion
conda env create -f environment.yaml
conda activate ldm
wget -O models/ldm/inpainting_big/last.ckpt https://heibox.uni-heidelberg.de/f/4d9ac7ea40c64582b7c9/?dl=1
python -m scripts.inpaint --indir data/inpainting_examples/ --outdir outputs/inpainting_results --export=True
```

> b. Download the ONNX graphs (there are 3 files) using this link: TODO make a link

2. Build OpenCV (preferably with CUDA support enabled).
3. Run the script:

```bash
cd opencv/samples/dnn
python ldm_inpainting.py 
python ldm_inpainting.py -e=<path-to-InpaintEncoder.onnx file> -d=<path-to-InpaintDecoder.onnx file> -df=<path-to-LatenDiffusion.onnx file> -i=<path-to-image>
```
Right after the last command you will be shown an image. Click the left mouse button to start selecting the region you would like to be inpainted (removed). Once you finish marking the region, click the left mouse button again and press the Esc key on your keyboard. The inpainting process will then start.
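For reference, the interactive selection works along these lines; this is a simplified sketch of the flow, not the exact code from `ldm_inpainting.py`, and the image path and window name are placeholders:

```python
import cv2 as cv
import numpy as np

img = cv.imread("example.jpg")            # placeholder image path
mask = np.zeros(img.shape[:2], np.uint8)  # inpainting mask built from the selection
points, drawing = [], False

def on_mouse(event, x, y, flags, param):
    # The first left click starts the selection, the second one closes the region.
    global drawing
    if event == cv.EVENT_LBUTTONDOWN:
        drawing = not drawing
        if not drawing and len(points) > 2:
            cv.fillPoly(mask, [np.array(points, np.int32)], 255)
    elif event == cv.EVENT_MOUSEMOVE and drawing:
        points.append((x, y))
        cv.circle(img, (x, y), 2, (0, 0, 255), -1)

cv.namedWindow("inpainting")
cv.setMouseCallback("inpainting", on_mouse)
while True:
    cv.imshow("inpainting", img)
    if cv.waitKey(30) == 27:              # Esc ends the interaction
        break
# `mask` now marks the region handed to the diffusion model for inpainting.
```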

Note: If you are running it on CPU it might take a considerable amount of time. Also make sure to have about 15 GB of RAM to keep the process fast (otherwise swapping will kick in and everything will be slower).
 
Current challenges:

1. The diffusion process is slow (many layers fall back to CPU when running with the CUDA backend).
2. The diffusion result does not exactly match that of the original torch pipeline.

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
2024-08-21 14:48:37 +03:00
dnn_model_runner/dnn_conversion Merge pull request #20290 from wjj19950828:add_paddle_humanseg_demo 2021-09-27 21:59:09 +00:00
face_detector Merge pull request #18591 from sl-sergei:download_utilities 2020-12-11 10:15:32 +00:00
results Merge pull request #20422 from fengyuentau:dnn_face 2021-10-08 19:13:49 +00:00
.gitignore Merge pull request #18591 from sl-sergei:download_utilities 2020-12-11 10:15:32 +00:00
action_recognition.py Merge pull request #14627 from l-bat:demo_kinetics 2019-05-30 17:36:00 +03:00
classification.cpp Merge pull request #25519 from gursimarsingh:improved_classification_sample 2024-08-06 09:16:11 +03:00
classification.py Merge pull request #25519 from gursimarsingh:improved_classification_sample 2024-08-06 09:16:11 +03:00
CMakeLists.txt Merge pull request #20422 from fengyuentau:dnn_face 2021-10-08 19:13:49 +00:00
colorization.cpp Merge pull request #25433 from gursimarsingh:colorization_onnx_sample 2024-04-18 18:15:05 +03:00
colorization.py Merge pull request #25433 from gursimarsingh:colorization_onnx_sample 2024-04-18 18:15:05 +03:00
common.hpp Merge pull request #25519 from gursimarsingh:improved_classification_sample 2024-08-06 09:16:11 +03:00
common.py Merge pull request #25519 from gursimarsingh:improved_classification_sample 2024-08-06 09:16:11 +03:00
custom_layers.hpp Merge pull request #12264 from dkurt:dnn_remove_forward_method 2018-09-06 13:26:47 +03:00
dasiamrpn_tracker.cpp Merge pull request #24231 from fengyuentau:halide_cleanup_5.x 2023-10-13 16:53:18 +03:00
download_models.py Merge branch 4.x 2024-07-01 15:59:43 +03:00
edge_detection.py Fix edge_detection.py sample for Python 3 2019-01-09 15:28:10 +03:00
face_detect.cpp Update documentation 2022-01-10 18:34:39 +03:00
face_detect.py Update documentation 2022-01-10 18:34:39 +03:00
fast_neural_style.py changed readNetFromONNX to readNet 2023-09-08 18:36:13 +07:00
gpt2_inference.py Merge pull request #25868 from Abdurrahheem:ash/add-gpt2-sample 2024-07-18 16:47:12 +03:00
human_parsing.cpp Merge pull request #24231 from fengyuentau:halide_cleanup_5.x 2023-10-13 16:53:18 +03:00
human_parsing.py Merge pull request #20175 from rogday:dnn_samples_cuda 2021-06-01 14:00:51 +00:00
js_face_recognition.html change js_face_recognition sample with yunet 2024-04-22 15:59:54 +08:00
ldm_inpainting.py Merge pull request #25950 from Abdurrahheem:ash/add-inpainting-sample 2024-08-21 14:48:37 +03:00
mask_rcnn.py Merge pull request #17394 from huningxin:fix_segmentation_py 2020-05-27 11:20:07 +03:00
mobilenet_ssd_accuracy.py fix pylint warnings 2019-10-16 18:49:33 +03:00
models.yml Merge pull request #25519 from gursimarsingh:improved_classification_sample 2024-08-06 09:16:11 +03:00
nanotrack_tracker.cpp Merge pull request #24231 from fengyuentau:halide_cleanup_5.x 2023-10-13 16:53:18 +03:00
object_detection.cpp Merge branch 4.x 2024-01-23 17:06:52 +03:00
object_detection.py Merge branch 4.x 2024-01-19 17:32:22 +03:00
openpose.cpp Merge branch 4.x 2021-12-30 21:43:45 +00:00
openpose.py samples/dnn: better errormsg in openpose.py 2021-05-05 10:39:12 +02:00
optical_flow.py Merge pull request #24913 from usyntest:optical-flow-sample-raft 2024-01-29 17:37:52 +03:00
person_reid.cpp Merge pull request #24231 from fengyuentau:halide_cleanup_5.x 2023-10-13 16:53:18 +03:00
person_reid.py Merge pull request #20175 from rogday:dnn_samples_cuda 2021-06-01 14:00:51 +00:00
README.md Merge branch 4.x 2023-10-23 11:53:04 +03:00
scene_text_detection.cpp samples: replace regex 2020-12-05 12:50:37 +00:00
scene_text_recognition.cpp Merge pull request #17570 from HannibalAPE:text_det_recog_demo 2020-12-03 18:47:40 +00:00
scene_text_spotting.cpp solve Issue 23685 2023-05-25 21:34:51 +02:00
segmentation.cpp Merge pull request #25756 from gursimarsingh:bug_fix/segmentation_sample 2024-07-03 14:03:12 +03:00
segmentation.py Merge pull request #25756 from gursimarsingh:bug_fix/segmentation_sample 2024-07-03 14:03:12 +03:00
shrink_tf_graph_weights.py Text TensorFlow graphs parsing. MobileNet-SSD for 90 classes. 2017-10-08 22:25:29 +03:00
siamrpnpp.py Merge pull request #24231 from fengyuentau:halide_cleanup_5.x 2023-10-13 16:53:18 +03:00
speech_recognition.cpp dnn: fix various dnn related typos 2022-03-23 18:12:12 -04:00
speech_recognition.py dnn: fix various dnn related typos 2022-03-23 18:12:12 -04:00
text_detection.cpp Update text_detection.cpp 2024-05-02 11:38:18 -04:00
text_detection.py Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2022-02-11 17:32:37 +00:00
tf_text_graph_common.py Merge pull request #19417 from LupusSanctus:am/text_graph_identity 2021-02-17 18:01:41 +00:00
tf_text_graph_efficientdet.py dnn: EfficientDet 2020-05-28 17:23:42 +03:00
tf_text_graph_faster_rcnn.py StridedSlice from TensorFlow 2019-05-22 12:45:52 +03:00
tf_text_graph_mask_rcnn.py Enable ResNet-based Mask-RCNN models from TensorFlow Object Detection API 2019-02-06 13:05:11 +03:00
tf_text_graph_ssd.py Use ==/!= to compare constant literals (str, bytes, int, float, tuple) 2021-11-25 15:39:58 +01:00
virtual_try_on.py Merge pull request #24231 from fengyuentau:halide_cleanup_5.x 2023-10-13 16:53:18 +03:00
vit_tracker.cpp Merge pull request #25771 from fengyuentau:vittrack_black_input 2024-06-18 12:48:28 +03:00
yolo_detector.cpp Merge pull request #25794 from Abdurrahheem:ash/yolov10-support 2024-07-02 18:26:34 +03:00

OpenCV deep learning module samples

Model Zoo

Check the wiki for a list of tested models.

If OpenCV is built with Intel's Inference Engine support, you can use Intel's pre-trained models.
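Such a build lets the samples route inference through that backend. A minimal sketch, assuming an OpenVINO IR model (the .xml/.bin file names are placeholders) and an OpenCV build with Inference Engine support:

```python
import cv2 as cv

# Placeholder OpenVINO IR files; any model readable by cv.dnn.readNet works here.
net = cv.dnn.readNet("model.xml", "model.bin")
net.setPreferableBackend(cv.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)
```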

There are different preprocessing parameters, such as mean subtraction or scale factors, for different models. You may check the most popular models and their parameters in the models.yml configuration file. It can also be used for aliasing sample parameters. For example:

```bash
python object_detection.py opencv_fd --model /path/to/caffemodel --config /path/to/prototxt
```
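As an illustration of what such an alias carries, an entry can be inspected with OpenCV's FileStorage (a minimal sketch, assuming models.yml is in the current directory and the opencv_fd entry keeps its usual mean/scale/width/height/sample fields):

```python
import cv2 as cv

# Peek at the preprocessing parameters stored for the "opencv_fd" alias in models.yml.
fs = cv.FileStorage("models.yml", cv.FILE_STORAGE_READ)
fd = fs.getNode("opencv_fd")

mean_node = fd.getNode("mean")
mean = [mean_node.at(i).real() for i in range(mean_node.size())]
print("sample script:", fd.getNode("sample").string())
print("input size:", int(fd.getNode("width").real()), "x", int(fd.getNode("height").real()))
print("mean:", mean, "scale:", fd.getNode("scale").real())
fs.release()
```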

Check the -h option to see which values are used by default:

```bash
python object_detection.py opencv_fd -h
```

Sample models

You can download sample models using download_models.py. For example, the following command will download network weights for the OpenCV Face Detector model and store them in the FaceDetector folder:

```bash
python download_models.py --save_dir FaceDetector opencv_fd
```

You can use default configuration files adopted for OpenCV from here.

You can also use the script to download necessary files from your code. Assume you have the following code inside your_script.py:

```python
from download_models import downloadFile

filepath1 = downloadFile("https://drive.google.com/uc?export=download&id=0B3gersZ2cHIxRm5PMWRoTkdHdHc", None, filename="MobileNetSSD_deploy.caffemodel", save_dir="save_dir_1")
filepath2 = downloadFile("https://drive.google.com/uc?export=download&id=0B3gersZ2cHIxRm5PMWRoTkdHdHc", "994d30a8afaa9e754d17d2373b2d62a7dfbaaf7a", filename="MobileNetSSD_deploy.caffemodel")
print(filepath1)
print(filepath2)
# Your code
```

By running the following commands, you will get the MobileNetSSD_deploy.caffemodel file:

```bash
export OPENCV_DOWNLOAD_DATA_PATH=download_folder
python your_script.py
```

Note that you can provide a directory using the save_dir parameter or via the OPENCV_SAVE_DIR environment variable.

Face detection

The original model with single-precision floating-point weights has been quantized using the TensorFlow framework. To achieve the best accuracy, run the model on BGR images resized to 300x300, applying mean subtraction of values (104, 177, 123) for the blue, green and red channels respectively.
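For illustration, that preprocessing maps to cv.dnn.blobFromImage as sketched below; the model and config file names are placeholders for the files fetched by download_models.py:

```python
import cv2 as cv

# Placeholder paths: use the weights/config downloaded for the opencv_fd alias.
net = cv.dnn.readNet("opencv_face_detector.caffemodel", "opencv_face_detector.prototxt")

img = cv.imread("example.jpg")  # BGR image, as loaded by OpenCV
blob = cv.dnn.blobFromImage(img, scalefactor=1.0, size=(300, 300),
                            mean=(104, 177, 123), swapRB=False, crop=False)
net.setInput(blob)
detections = net.forward()  # [1, 1, N, 7]: image id, class, confidence, x1, y1, x2, y2
```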

The following are accuracy metrics obtained using the COCO object detection evaluation tool on the FDDB dataset (see script), both with a resize to 300x300 and keeping the original image sizes.

AP - Average Precision                            | FP32/FP16 | UINT8          | FP32/FP16 | UINT8          |
AR - Average Recall                               | 300x300   | 300x300        | any size  | any size       |
--------------------------------------------------|-----------|----------------|-----------|----------------|
AP @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] | 0.408     | 0.408          | 0.378     | 0.328 (-0.050) |
AP @[ IoU=0.50      | area=   all | maxDets=100 ] | 0.849     | 0.849          | 0.797     | 0.790 (-0.007) |
AP @[ IoU=0.75      | area=   all | maxDets=100 ] | 0.251     | 0.251          | 0.208     | 0.140 (-0.068) |
AP @[ IoU=0.50:0.95 | area= small | maxDets=100 ] | 0.050     | 0.051 (+0.001) | 0.107     | 0.070 (-0.037) |
AP @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] | 0.381     | 0.379 (-0.002) | 0.380     | 0.368 (-0.012) |
AP @[ IoU=0.50:0.95 | area= large | maxDets=100 ] | 0.455     | 0.455          | 0.412     | 0.337 (-0.075) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] | 0.299     | 0.299          | 0.279     | 0.246 (-0.033) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] | 0.482     | 0.482          | 0.476     | 0.436 (-0.040) |
AR @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] | 0.496     | 0.496          | 0.491     | 0.451 (-0.040) |
AR @[ IoU=0.50:0.95 | area= small | maxDets=100 ] | 0.189     | 0.193 (+0.004) | 0.284     | 0.232 (-0.052) |
AR @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] | 0.481     | 0.480 (-0.001) | 0.470     | 0.458 (-0.012) |
AR @[ IoU=0.50:0.95 | area= large | maxDets=100 ] | 0.528     | 0.528          | 0.520     | 0.462 (-0.058) |

References