Commit Graph

210 Commits

Author SHA1 Message Date
Lubov Batanina
7d3d6bc4e2 Merge pull request #13932 from l-bat:MyriadX_master_dldt
* Fix precision in tests for MyriadX

* Fix ONNX tests

* Add output range in ONNX tests

* Skip tests on Myriad OpenVINO 2018R5

* Add MyriadX detection

* Add MyriadX detection on OpenVINO R5

* Skip tests on Myriad for the next version of OpenVINO

* dnn(ie): VPU type from environment variable

* dnn(test): validate VPU type

* dnn(test): update DLIE test skip conditions
2019-03-29 16:42:58 +03:00
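A minimal sketch of the environment-variable based VPU detection described in the PR above. The variable name OPENCV_DNN_IE_VPU_TYPE and the value strings are assumptions used for illustration only, not verified names from the PR:

    #include <cstdlib>
    #include <iostream>
    #include <string>

    // Hypothetical helper: read the VPU type from an environment variable
    // (variable name and values are assumed for illustration).
    static std::string getVpuType()
    {
        const char* env = std::getenv("OPENCV_DNN_IE_VPU_TYPE");
        return env ? std::string(env) : std::string("auto");
    }

    int main()
    {
        const std::string vpu = getVpuType();
        if (vpu == "MYRIAD_X")
            std::cout << "Apply MyriadX-specific precision thresholds\n";
        else
            std::cout << "Skip MyriadX-only checks (VPU type: " << vpu << ")\n";
        return 0;
    }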
Alexander Alekhin
8bde6aea4b Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-02-19 19:49:13 +00:00
Dmitry Kurtaev
ca5976e3d4 Fix IE backend considering future changes. 2019-02-18 19:26:04 +03:00
Alexander Alekhin
dfef04b325 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-02-12 17:54:40 +03:00
Dmitry Kurtaev
0711dab09d Fix Intel's Inference Engine backend from future. Second try. 2019-02-11 19:47:57 +03:00
Alexander Alekhin
665408e57f Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-02-01 13:17:32 +03:00
Alexander Alekhin
3585522b24 Merge pull request #13692 from dkurt:dnn_do_not_crash_myriad_in_tests 2019-01-28 18:34:20 +00:00
Dmitry Kurtaev
3c3c5ef2b6 Fix a dnn bug with retrieving all the output blobs 2019-01-28 18:48:56 +03:00
Dmitry Kurtaev
ff775b2e54 Remove ASSERT_ANY_THROW checks for Myriad plugin and FP32 networks 2019-01-25 20:09:54 +03:00
Alexander Nesterov
97c3bcb1b7 Added fix for other size 2019-01-24 12:51:16 -01:00
Alexander Alekhin
631b246881 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-01-22 18:00:34 +00:00
Dmitry Kurtaev
f0ddf302b2 Move Inference Engine to new API 2019-01-17 14:28:48 +03:00
WuZhiwen
3d44e9ad92 Merge pull request #13520 from wzw-intel:hang
* dnn/Vulkan: fix GPU hang for heavy convolution tasks

The Intel i915 driver declares a GPU hang if a compute shader takes too long to
complete. See https://bugs.freedesktop.org/show_bug.cgi?id=108947 for details.

The idea in this commit is to divide a heavy task into several lighter ones and run
the compute shader multiple times, so that each run finishes quickly enough (see the
sketch after this entry).

TODO: Add more efficient compute shader

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>

* dnn/Vulkan: add a more efficient conv shader
2018-12-27 15:06:44 +03:00
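A minimal, framework-agnostic sketch of the splitting strategy described above, with a hypothetical dispatch() function standing in for the real Vulkan command submission:

    #include <algorithm>
    #include <cstdio>

    // Hypothetical stand-in for one compute-shader dispatch over rows [begin, end).
    static void dispatch(int begin, int end)
    {
        std::printf("dispatch rows %d..%d\n", begin, end);
    }

    // Split a heavy workload into chunks small enough that each submission
    // completes before the driver's hang-check timeout.
    static void runInChunks(int totalRows, int maxRowsPerDispatch)
    {
        for (int row = 0; row < totalRows; row += maxRowsPerDispatch)
            dispatch(row, std::min(row + maxRowsPerDispatch, totalRows));
    }

    int main()
    {
        runInChunks(1024, 128);  // 8 light dispatches instead of 1 heavy one
        return 0;
    }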
Wu Zhiwen
be6a837e15 dnn: add Vulkan device check for BackendRegistry 2018-12-24 10:41:58 +08:00
Alexander Alekhin
e82e672a93 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-12-06 07:06:58 +00:00
Alexander Alekhin
9ff1c39daa dnn: fixup available backends/targets 2018-12-05 19:19:17 +03:00
Maksim Shabunin
fe459c82e5 Merge pull request #13332 from mshabunin:dnn-backends
DNN backends registry (#13332)

* Added dnn backends registry

* dnn: process DLIE/FPGA target
2018-12-05 18:11:45 +03:00
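A short sketch of querying the registry introduced by this PR, assuming the getAvailableBackends()/getAvailableTargets() helpers exposed in cv::dnn behave as in current releases:

    #include <opencv2/dnn.hpp>
    #include <iostream>
    #include <utility>

    int main()
    {
        using namespace cv::dnn;
        // Enumerate every (backend, target) pair registered in this build.
        for (const std::pair<Backend, Target>& bt : getAvailableBackends())
            std::cout << "backend " << bt.first << ", target " << bt.second << "\n";

        // Or list the targets supported by one backend.
        for (Target t : getAvailableTargets(DNN_BACKEND_OPENCV))
            std::cout << "DNN_BACKEND_OPENCV supports target " << t << "\n";
        return 0;
    }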
WuZhiwen
02cc1cd6e6 Merge pull request #13244 from wzw-intel:init_vulkan
* dnn/Vulkan: don't init Vulkan runtime if using other backend/target

There is no need to call an init API explicitly; the Vulkan environment is initialized
automatically the first time a VkCom object is used (a sketch of this lazy-initialization
pattern follows this entry).

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>

* dnn/Vulkan: suppress compiler warning for "-Wsign-promo"

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
2018-11-22 19:46:30 +03:00
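A minimal sketch of the lazy-initialization pattern described above, with a placeholder initVulkanRuntime() standing in for the real runtime setup:

    #include <mutex>

    // Placeholder for the real Vulkan runtime setup (illustration only).
    static void initVulkanRuntime()
    {
        // load the loader library, create a VkInstance, pick a device, ...
    }

    class VkComObject
    {
    public:
        VkComObject()
        {
            // Initialize the Vulkan environment exactly once, on first use,
            // so other backends/targets never pay the cost.
            static std::once_flag flag;
            std::call_once(flag, initVulkanRuntime);
        }
    };

    int main()
    {
        VkComObject a, b;  // only the first construction triggers initialization
        return 0;
    }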
Alexander Alekhin
7fa7fa0226 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-11-21 08:33:39 +00:00
Dmitry Kurtaev
0d117312c9 DNN_TARGET_FPGA using Intel's Inference Engine 2018-11-19 11:41:43 +03:00
Alexander Alekhin
22dbcf98c5 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-11-17 14:17:35 +00:00
Alexander Alekhin
f2bec05e6d Merge pull request #12913 from dkurt:dnn_fix_ie_hyperparams 2018-11-16 18:36:12 +00:00
Dmitry Kurtaev
b5c54e447c Extra hyperparameters for Intel's Inference Engine layers 2018-11-15 20:06:37 +03:00
Alexander Alekhin
96c71dd3d2 dnn: reduce set of ignored warnings 2018-11-15 13:15:59 +03:00
Alexander Alekhin
8409aa9eba Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-11-14 19:41:09 +00:00
Dmitry Kurtaev
80265a0815 Fix a bug with OpenVINO backend 2018-11-14 13:42:06 +03:00
Alexander Alekhin
f5b212a9d4 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-11-12 17:58:45 +03:00
Alexander Alekhin
801c943009 fix coverity reports 2018-11-11 13:51:47 +00:00
Alexander Alekhin
1913482cf5 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-11-10 20:50:26 +00:00
Alexander Alekhin
0c261acf3a Merge pull request #13065 from dkurt:dnn_update_tf_faster_rcnn 2018-11-08 16:31:39 +00:00
Dmitry Kurtaev
dc9e6d3af8 Update a script to generate text graphs for Faster-RCNN networks from TensorFlow 2018-11-07 18:33:01 +03:00
catree
eebf0dd7c9 Fix integer overflow when accumulating timing values. 2018-11-07 13:04:48 +01:00
Wu Zhiwen
33c9d57c6f dnn/Vulkan: skip heavy convolution task
This is a workaround for a GPU hang on heavy convolution workloads (> 10 GFLOPS),
e.g. ResNet101_DUC_HDC.

For such long-running tasks, vkWaitForFences() returns without error, but the next call to
vkQueueSubmit() returns -4, i.e. "VK_ERROR_DEVICE_LOST", and the driver reports a GPU hang.

The root cause of the hang needs more investigation, and the convolution shader needs to be
optimized to reduce processing time. A sketch of the workload estimate follows this entry.
2018-11-07 16:38:36 +08:00
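An illustrative sketch of the kind of workload check described above: estimate the convolution's FLOPs and skip the Vulkan path when the estimate exceeds a threshold. The formula and the 10-GFLOP cutoff come from the commit message; everything else is an assumption for illustration:

    #include <cstdio>

    // Rough FLOP estimate for a 2-D convolution:
    // 2 * Kh * Kw * Cin * Cout * Hout * Wout (one multiply + one add per tap).
    static double convFlops(int kh, int kw, int cin, int cout, int hout, int wout)
    {
        return 2.0 * kh * kw * cin * cout * double(hout) * double(wout);
    }

    int main()
    {
        const double kGFlopLimit = 10.0;  // threshold mentioned in the commit
        double gflops = convFlops(3, 3, 512, 512, 56, 56) / 1e9;
        if (gflops > kGFlopLimit)
            std::printf("%.1f GFLOPs: too heavy, fall back to the CPU path\n", gflops);
        else
            std::printf("%.1f GFLOPs: run on the Vulkan backend\n", gflops);
        return 0;
    }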
Wu Zhiwen
3914c17b0d dnn/Vulkan: Refine error handle mechanism
Fall back to the OPENCV backend and CPU target if an exception is thrown by the
vkcom backend (see the sketch after this entry).

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
2018-10-31 09:47:33 +08:00
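A minimal sketch of that fallback pattern expressed at the application level; the model file name is a placeholder, and the commit itself performs the fallback inside the library rather than in user code:

    #include <opencv2/dnn.hpp>
    #include <iostream>

    int main()
    {
        using namespace cv::dnn;
        Net net = readNet("model.onnx");              // placeholder model file
        net.setPreferableBackend(DNN_BACKEND_VKCOM);  // try the Vulkan backend first
        net.setPreferableTarget(DNN_TARGET_VULKAN);

        int sz[] = {1, 3, 224, 224};
        cv::Mat blob(4, sz, CV_32F, cv::Scalar(0));
        net.setInput(blob);
        cv::Mat out;
        try
        {
            out = net.forward();
        }
        catch (const cv::Exception& e)
        {
            std::cerr << "vkcom failed (" << e.what() << "), falling back to CPU\n";
            net.setPreferableBackend(DNN_BACKEND_OPENCV);
            net.setPreferableTarget(DNN_TARGET_CPU);
            out = net.forward();
        }
        return 0;
    }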
WuZhiwen
6e3ea8b49d Merge pull request #12703 from wzw-intel:vkcom
* dnn: Add a Vulkan based backend

This commit adds a new backend, "DNN_BACKEND_VKCOM", and a new target,
"DNN_TARGET_VULKAN". VKCOM stands for Vulkan-based computation library.

The backend uses the Vulkan API and SPIR-V shaders to run the inference
computation for layers. The layer types implemented in DNN_BACKEND_VKCOM include:
Conv, Concat, ReLU, LRN, PriorBox, Softmax, MaxPooling, AvePooling, Permute.

This is just initial work on Vulkan in OpenCV DNN; more layer types will be
supported, and performance tuning is underway.

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>

* dnn/vulkan: Add FindVulkan.cmake to detect Vulkan SDK

To build dnn with Vulkan support, install the Vulkan SDK, set the environment
variable "VULKAN_SDK", and add "-DWITH_VULKAN=ON" to the cmake command.

You can download the Vulkan SDK from:
https://vulkan.lunarg.com/sdk/home#linux

For installation instructions, see
https://vulkan.lunarg.com/doc/sdk/latest/linux/getting_started.html
https://vulkan.lunarg.com/doc/sdk/latest/windows/getting_started.html
https://vulkan.lunarg.com/doc/sdk/latest/mac/getting_started.html
for Linux, Windows, and macOS respectively.

To run the Vulkan backend, you also need a Mesa driver installed.
On Ubuntu, install it with 'sudo apt-get install mesa-vulkan-drivers'.

To test, run '$BUILD_DIR/bin/opencv_test_dnn --gtest_filter=*VkCom*'.
A C++ sketch of selecting this backend follows this entry.

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>

* dnn/Vulkan: dynamically load Vulkan runtime

There is no compile-time dependency on the Vulkan library.
If the Vulkan runtime is unavailable, fall back to the CPU path.

Use the environment variable "OPENCL_VULKAN_RUNTIME" to specify the path to your
own Vulkan runtime library.

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>

* dnn/Vulkan: Add a python script to compile GLSL shaders to SPIR-V shaders

The SPIR-V shaders are stored as text-based 32-bit hexadecimal numbers and
inserted into .cpp files as unsigned int32 arrays.

* dnn/Vulkan: Put Vulkan headers into 3rdparty directory and some other fixes

Vulkan header files are copied from
https://github.com/KhronosGroup/Vulkan-Docs/tree/master/include/vulkan
to 3rdparty/include

Fix the Copyright declaration issue.

Refine OpenCVDetectVulkan.cmake

* dnn/Vulkan: Add vulkan backend tests into existing ones.

Also fixed some test failures.

- Don't use a bool variable as a shader uniform
- Fix the dispatched group number exceeding the maximum
- Bypass "group > 1" convolution; this should be supported in the future.

* dnn/Vulkan: Fix multiple initialization in one thread.
2018-10-29 17:51:26 +03:00
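A minimal sketch of selecting the new backend and target through the C++ API, assuming a placeholder model file; running the '--gtest_filter=*VkCom*' command quoted above exercises the backend's own tests:

    #include <opencv2/dnn.hpp>

    int main()
    {
        using namespace cv::dnn;
        // Load any supported model (file name is a placeholder for illustration).
        Net net = readNet("model.onnx");

        // Request the Vulkan compute backend and target added by this PR.
        net.setPreferableBackend(DNN_BACKEND_VKCOM);
        net.setPreferableTarget(DNN_TARGET_VULKAN);

        int sz[] = {1, 3, 224, 224};
        cv::Mat blob(4, sz, CV_32F, cv::Scalar(0));
        net.setInput(blob);
        cv::Mat out = net.forward();
        return 0;
    }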
Alexander Alekhin
edacd91a27 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-10-15 20:15:42 +00:00
Dmitry Kurtaev
dc3406eed9 Fix Pooling and Convolution layers from Intel's Inference Engine 2018-10-15 16:40:28 +03:00
Alexander Alekhin
dada5a422d Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-10-09 21:20:15 +00:00
Alexander Alekhin
26ba4f3c1d Merge pull request #12754 from alalek:dnn_ocl4dnn_async_expressions 2018-10-08 15:22:24 +00:00
Alexander Alekhin
634dd656d5 dnn: don't use Mat expressions with async UMat functions 2018-10-05 17:09:50 +03:00
Alexander Alekhin
9d02d42afe dnn(ocl4dnn): don't use getUMat()
especially in CPU-only processing
2018-10-05 15:24:51 +03:00
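A hedged sketch of the idea behind this change: stay on plain Mat in CPU-only processing and, when an OpenCL path is taken, copy into a UMat explicitly instead of calling Mat::getUMat(). The blur call is only a stand-in for the real per-layer work:

    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <opencv2/imgproc.hpp>

    void process(const cv::Mat& src, cv::Mat& dst)
    {
        if (!cv::ocl::useOpenCL())
        {
            cv::blur(src, dst, cv::Size(3, 3));  // CPU-only path: no UMat at all
            return;
        }
        cv::UMat usrc, udst;
        src.copyTo(usrc);                        // explicit copy instead of getUMat()
        cv::blur(usrc, udst, cv::Size(3, 3));
        udst.copyTo(dst);
    }

    int main()
    {
        cv::Mat img(64, 64, CV_8UC1, cv::Scalar(128)), out;
        process(img, out);
        return 0;
    }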
Alexander Alekhin
a8b0db4e5d Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-09-28 14:14:47 +03:00
Dmitry Kurtaev
24ab751547 Merge pull request #12565 from dkurt:dnn_non_intel_gpu
* Remove isIntel check from deep learning layers

* Remove fp16->fp32 fallbacks where it's not necessary

* Fix Kernel::run to prevent localsize > globalsize
2018-09-26 16:27:00 +03:00
Dmitry Kurtaev
f8398d80bc add Net::getUnconnectedOutLayersNames method 2018-09-25 18:10:45 +03:00
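A short usage sketch for the new method, assuming a placeholder model file and a zero-filled input blob:

    #include <opencv2/dnn.hpp>
    #include <vector>

    int main()
    {
        using namespace cv::dnn;
        Net net = readNet("model.onnx");  // placeholder model file

        int sz[] = {1, 3, 224, 224};
        cv::Mat blob(4, sz, CV_32F, cv::Scalar(0));
        net.setInput(blob);

        // Ask for every unconnected output layer by name in a single forward pass.
        std::vector<cv::String> names = net.getUnconnectedOutLayersNames();
        std::vector<cv::Mat> outs;
        net.forward(outs, names);
        return 0;
    }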
Maksim Shabunin
e0f524d3b7 Fixed several incorrect printf format specifiers 2018-09-24 11:31:40 +03:00
Alexander Alekhin
808ba552c5 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-09-14 23:44:35 +00:00
Lubov Batanina
0c8590027f Merge pull request #12071 from l-bat/l-bat:onnx_parser
* Add Squeezenet support in ONNX

* Add AlexNet support in ONNX

* Add Googlenet support in ONNX

* Add CaffeNet and RCNN support in ONNX

* Add VGG16 and VGG16 with batch normalization support in ONNX

* Add RCNN, ZFNet, ResNet18v1 and ResNet50v1 support in ONNX

* Add ResNet101_DUC_HDC

* Add Tiny Yolov2

* Add CNN_MNIST, MobileNetv2 and LResNet100 support in ONNX

* Add ONNX models for emotion recognition

* Add DenseNet121 support in ONNX

* Add Inception v1 support in ONNX

* Refactoring

* Fix tests

* Fix tests

* Skip unstable test

* Modify Reshape operation
2018-09-10 21:07:51 +03:00
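A minimal sketch of loading one of these models through the ONNX importer this PR adds; the file name and input size are placeholders:

    #include <opencv2/dnn.hpp>
    #include <iostream>

    int main()
    {
        using namespace cv::dnn;
        // Importer added by this PR; works for the architectures listed above.
        Net net = readNetFromONNX("squeezenet.onnx");  // placeholder file name

        int sz[] = {1, 3, 227, 227};
        cv::Mat blob(4, sz, CV_32F, cv::Scalar(0));
        net.setInput(blob);
        cv::Mat prob = net.forward();
        std::cout << "output elements: " << prob.total() << "\n";
        return 0;
    }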
Alexander Alekhin
dca657a2fd Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-09-10 00:10:21 +03:00
Hamdi Sahloul
a39e0daacf Utilize CV_UNUSED macro 2018-09-07 20:33:52 +09:00
Alexander Alekhin
73bfe68821 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-09-07 12:40:27 +03:00
Dmitry Kurtaev
d486204a0d Merge pull request #12264 from dkurt:dnn_remove_forward_method
* Remove a forward method in dnn::Layer

* Add a test

* Fix tests

* Mark multiple dnn::Layer::finalize methods as deprecated

* Revert dnn's inputBlobs back to a vector of pointers

* Remove Layer::forward_fallback from CV_OCL_RUN scopes
2018-09-06 13:26:47 +03:00
Alexander Alekhin
d74b98c3d9 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-09-04 18:39:03 +00:00
Dmitry Kurtaev
27a6be8763 Fix #12407 2018-09-04 17:48:52 +03:00
Alexander Alekhin
f10fd64630 dnn: update "guard" inline namespace
- differ from 3.4 branch
2018-09-03 20:46:57 +00:00
Dmitry Kurtaev
50bceea038 Include preprocessing nodes in object detection TensorFlow networks (#12211)
* Include preprocessing nodes in object detection TensorFlow networks

* Enable more fusion

* faster_rcnn_resnet50_coco_2018_01_28 test
2018-08-31 15:41:56 +03:00
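A short sketch of loading a TensorFlow detection model together with a generated text graph (such as the Faster-RCNN text graphs produced by the script mentioned earlier in this log); both file names are placeholders:

    #include <opencv2/dnn.hpp>

    int main()
    {
        using namespace cv::dnn;
        // Placeholder file names: a frozen graph plus its generated .pbtxt text graph.
        Net net = readNetFromTensorflow("frozen_inference_graph.pb",
                                        "faster_rcnn.pbtxt");

        int sz[] = {1, 3, 600, 800};
        cv::Mat blob(4, sz, CV_32F, cv::Scalar(0));
        net.setInput(blob);
        cv::Mat detections = net.forward();
        return 0;
    }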
Alexander Alekhin
15e57d28f5 Merge pull request #12293 from alalek:cleanup_stl_string_replacement 2018-08-30 15:43:57 +00:00
Dmitry Kurtaev
3e027df583 Enable more deep learning tests using Intel's Inference Engine backend 2018-08-27 18:37:35 +03:00
Alexander Alekhin
7f73b105ca core: std::string more changes 2018-08-27 15:41:01 +03:00
Alexander Alekhin
096366738b dnn(build): fix CV_Assert() usage 2018-08-22 16:04:40 +03:00
Alexander Alekhin
d2e08a524e core: repair CV_Assert() messages
Multi-argument CV_Assert() is accessible via CV_Assert_N() (with malformed messages).
2018-08-15 17:43:10 +03:00
Vadim Pisarevsky
70b893333d Merge pull request #12130 from dkurt:dnn_ie_mvn 2018-08-06 14:37:46 +00:00
Dmitry Kurtaev
be08730cd6 MVN layer using Intel's Inference Engine backend 2018-08-02 17:49:03 +03:00
Dmitry Kurtaev
8e034053af Faster-RCNN from TensorFlow on CPU with Intel's Inference Engine backend 2018-08-01 11:29:58 +03:00
Alexander Alekhin
9137e2d635 Merge pull request #12060 from alalek:dnn_debug_layers 2018-07-26 15:14:32 +00:00
Dmitry Kurtaev
faa6c4e1e1 Faster-RCNN and RFCN models on CPU using Intel's Inference Engine backend.
Enable Torch layers tests with Intel's Inference Engine backend.
2018-07-25 19:04:55 +03:00
Alexander Alekhin
45b5b3c13a dnn: check layer output for NaN/Inf 2018-07-25 16:25:18 +03:00
Dmitry Kurtaev
070393dfda uint8 inputs for deep learning networks 2018-07-19 14:37:33 +03:00
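A minimal sketch of feeding 8-bit input, assuming blobFromImage's ddepth parameter and setInput's scale/mean parameters behave as in current OpenCV releases; the model file and normalization constants are placeholders:

    #include <opencv2/dnn.hpp>

    int main()
    {
        using namespace cv::dnn;
        Net net = readNet("model.onnx");  // placeholder model file

        // Keep the blob in CV_8U; scaling and mean subtraction are deferred to
        // setInput so the network can consume uint8 data directly.
        cv::Mat img(224, 224, CV_8UC3, cv::Scalar(104, 117, 123));
        cv::Mat blob = blobFromImage(img, 1.0, cv::Size(224, 224), cv::Scalar(),
                                     /*swapRB=*/true, /*crop=*/false, CV_8U);
        net.setInput(blob, "", 1.0 / 255, cv::Scalar(104, 117, 123));
        cv::Mat out = net.forward();
        return 0;
    }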
Alexander Alekhin
6c4f618db5 Merge pull request #11104 from asciian:reading_from_stream 2018-07-17 16:24:06 +00:00
Li Peng
f0cadaa6e3 enable concat layer fuse for OCL target
Signed-off-by: Li Peng <peng.li@intel.com>
2018-07-17 12:46:16 +08:00
Dmitry Kurtaev
8b5f061dae Replace std::vector<char> to std::vector<uchar> for Java bindings of dnn importers 2018-07-11 18:58:56 +03:00
Dmitry Kurtaev
d57e5406f0 Add readNet* functions which parse models from byte arrays 2018-07-10 11:12:01 +03:00
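A short sketch of the in-memory loading path, assuming the generic readNet(framework, bufferModel, bufferConfig) overload added here; readBytes() is a hypothetical helper and the file names are placeholders:

    #include <opencv2/dnn.hpp>
    #include <fstream>
    #include <iterator>
    #include <vector>

    // Hypothetical helper: slurp a file into a byte array.
    static std::vector<uchar> readBytes(const std::string& path)
    {
        std::ifstream f(path, std::ios::binary);
        return std::vector<uchar>(std::istreambuf_iterator<char>(f),
                                  std::istreambuf_iterator<char>());
    }

    int main()
    {
        using namespace cv::dnn;
        std::vector<uchar> model  = readBytes("mobilenet.caffemodel");  // placeholders
        std::vector<uchar> config = readBytes("mobilenet.prototxt");
        // Parse the model directly from byte arrays; no temporary files needed.
        Net net = readNet("caffe", model, config);
        return 0;
    }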
Dmitry Kurtaev
362d4f5395 Replace convertFp16 from dnn::Net::setInput() 2018-07-09 14:35:54 +03:00
Vadim Pisarevsky
523b6f32ba Merge pull request #11867 from dkurt:dnn_ie_layers 2018-07-06 13:13:20 +00:00
Dmitry Kurtaev
019c2f2115 Enable more deep learning tests 2018-07-05 14:23:15 +03:00
Dmitry Kurtaev
f25a01bb5a Disable fusion to output layers 2018-07-04 15:53:47 +03:00
Alexander Alekhin
f40231af5d Merge pull request #11851 from pengli:3.4 2018-06-29 15:01:20 +00:00
Li Peng
145eae321e pooling ocl kernel optimization
Set the global size to the real output size; also optimize
max pooling index computation when necessary.

Signed-off-by: Li Peng <peng.li@intel.com>
2018-06-29 15:22:49 +08:00
Dmitry Kurtaev
346871e27f Set output layers names and types for models in DLDT's intermediate representation 2018-06-28 10:21:45 +03:00
Dmitry Kurtaev
40b85c1cd9 Remove undocumented feature to retrieve layer outputs by indices 2018-06-20 14:44:21 +03:00
Alexander Alekhin
5fd7cfbcad dnn: add runtime parameter OPENCV_DNN_BACKEND_DEFAULT
to control DNN_BACKEND_DEFAULT enumeration value behavior
2018-06-13 19:00:04 +03:00
Dmitry Kurtaev
f3a6ae5f00 Wrap Inference Engine init to try-catch 2018-06-07 12:55:52 +03:00
Vadim Pisarevsky
3cbd2e2764 Merge pull request #11650 from dkurt:dnn_default_backend 2018-06-06 09:30:39 +00:00
Dmitry Kurtaev
b781ac7346 Make Intel's Inference Engine backend the default if no preferable backend is specified. 2018-06-04 18:31:46 +03:00
Kuang Fangjun
9ae28415ec fix doc. 2018-06-03 17:44:24 +08:00
Dmitry Kurtaev
32bab45f81 Fix Inference Engine graphs with fused output layers 2018-05-31 16:21:08 +03:00
Dmitry Kurtaev
f96f934426 Update Intel's Inference Engine deep learning backend (#11587)
* Update Intel's Inference Engine deep learning backend

* Remove cpu_extension dependency

* Update Darknet accuracy tests
2018-05-31 14:05:21 +03:00
Alexander Alekhin
471c17321f improve code quality
- eliminate rand() calls
- uninitialized members/variables
- unused return values
- missing/useless NULL checks
2018-05-22 17:08:31 +03:00
Alexander Alekhin
d6279bfff8 fix build warnings 2018-05-17 18:29:21 +03:00
Li Peng
329abb5b64 dnn fp16 support
Signed-off-by: Li Peng <peng.li@intel.com>
2018-05-16 22:44:39 +08:00
Tomoaki Teshima
3f5347dd7a workaround for the test failure of opencv_test_dnn
* let the OpenCL kernel run only on Intel GPUs
  * brush up the workaround based on 9a2b028 from alalek
2018-05-16 19:23:19 +09:00
Alexander Alekhin
576d2dbac0 refactor: don't use CV_ErrorNoReturn() internally 2018-04-24 15:38:42 +03:00
Dmitry Kurtaev
4ec456f0a0 Custom layers for deep learning networks (#11129)
* Custom deep learning layers support

* Stack custom deep learning layers
2018-04-24 14:59:59 +03:00
Dmitry Kurtaev
bd77d100e1 Enable some tests for clDNN plugin from Intel's Inference Engine 2018-04-20 10:47:46 +03:00
Dmitry Kurtaev
97fec07d96 Support YOLOv3 model from Darknet 2018-04-16 18:44:12 +03:00
Vadim Pisarevsky
30175594e9 Merge pull request #11062 from dkurt:dnn_inf_engine_cldnn 2018-04-11 15:06:18 +00:00
Dmitry Kurtaev
4ef6c91583 Fix multiple inputs for models from Intel's Model Optimizer 2018-04-11 13:28:07 +03:00
Dmitry Kurtaev
709cf5d038 OpenCL GPU target for Inference Engine deep learning backend
Enable FP16 GPU target for DL Inference Engine backend.
2018-04-09 17:21:35 +03:00
Alexander Alekhin
1060c0f439 dnn: apply CV_OVERRIDE/CV_FINAL 2018-03-28 18:43:27 +03:00
Dmitry Kurtaev
2f3a9ba1d4 Update OpenCVDetectInferenceEngine.cmake 2018-03-28 16:34:37 +03:00
Dmitry Kurtaev
7972f47ed4 Load networks from the intermediate representation of Intel's Deep Learning Deployment Toolkit. 2018-03-26 07:24:21 +03:00