dnn(eltwise): fix handling of different number of channels
* dnn(test): reproducer for Eltwise layer issue from PR16063
* dnn(eltwise): rework support for inputs with different channels
* dnn(eltwise): get rid of finalize(), variableChannels
* dnn(eltwise): update input sorting by number of channels
- do not swap inputs if the number of channels is the same after truncation
* dnn(test): skip "shortcut" with batch size 2 on MYRIAD targets
* Added Swish and Mish activations
* Fixed whitespace errors
* Kernel implementation done
* Added function for launching kernel
* Changed type of 1.0
* Attempt to add test for Swish and Mish
* Resolving type mismatch for log
* exp from device
* Use log1pexp instead of adding 1
* Added OpenCL kernels
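For reference, Swish computes x * sigmoid(x) and Mish computes x * tanh(log1pexp(x)); a minimal scalar sketch of the math (illustrative only, not the actual CUDA/OpenCL kernels):

```cpp
#include <cmath>

// Illustrative scalar reference for the two activations; the real layer runs
// as device kernels, this only spells out the formulas.
static float log1pexp(float x)   // numerically stable log(1 + exp(x))
{
    return x > 0.0f ? x + std::log1p(std::exp(-x)) : std::log1p(std::exp(x));
}

static float swish(float x) { return x / (1.0f + std::exp(-x)); }  // x * sigmoid(x)
static float mish(float x)  { return x * std::tanh(log1pexp(x)); } // x * tanh(softplus(x))
```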
Asynchronous API from Intel's Inference Engine (#13694)
* Add forwardAsync for asynchronous mode from Intel's Inference Engine (usage sketch after this list)
* Python test for forwardAsync
* Replace Future_Mat with AsyncMat
* Shadow AsyncMat
* Isolate InferRequest callback
* Manage exceptions in Async API of IE
* Fix precision in tests for MyriadX
* Fix ONNX tests
* Add output range in ONNX tests
* Skip tests on Myriad OpenVINO 2018R5
* Add MyriadX detection
* Add MyriadX detection on OpenVINO R5
* Skip tests on Myriad for the next version of OpenVINO
* dnn(ie): VPU type from environment variable
* dnn(test): validate VPU type
* dnn(test): update DLIE test skip conditions
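A rough usage sketch of the asynchronous call (assumptions: the released handle type is cv::AsyncArray and forwardAsync requires the Inference Engine backend; this is not the exact test code):

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/core/async.hpp>

// Sketch: asynchronous inference through the Inference Engine backend.
// cv::AsyncArray is the handle type in the released API; the PR history
// above refers to it as Future_Mat/AsyncMat during review.
void runAsync(cv::dnn::Net& net, const cv::Mat& blob)
{
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setInput(blob);
    cv::AsyncArray handle = net.forwardAsync();  // returns immediately
    cv::Mat result;
    handle.get(result);                          // blocks until the request completes
}
```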
* dnn: Add a Vulkan based backend
This commit adds a new backend "DNN_BACKEND_VKCOM" and a
new target "DNN_TARGET_VULKAN". VKCOM means vulkan based
computation library.
This backend uses Vulkan API and SPIR-V shaders to do
the inference computation for layers. The layer types
implemented in DNN_BACKEND_VKCOM include:
Conv, Concat, ReLU, LRN, PriorBox, Softmax, MaxPooling,
AvePooling, Permute
This is just the beginning of Vulkan support in OpenCV DNN;
more layer types will be supported and performance
tuning is on the way.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
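Selecting the new backend on an already loaded network looks roughly like this (a sketch; layers outside the list above fall back to the default path):

```cpp
#include <opencv2/dnn.hpp>

// Sketch: route inference through the Vulkan-based backend/target added here.
void useVulkanBackend(cv::dnn::Net& net, const cv::Mat& blob)
{
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_VKCOM);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_VULKAN);
    net.setInput(blob);
    cv::Mat out = net.forward();  // unsupported layers fall back to the CPU implementation
}
```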
* dnn/vulkan: Add FindVulkan.cmake to detect Vulkan SDK
To build dnn with Vulkan support, install the Vulkan SDK, set the
environment variable "VULKAN_SDK", and add "-DWITH_VULKAN=ON" to the
cmake command.
You can download Vulkan SDK from:
https://vulkan.lunarg.com/sdk/home#linux
For installation instructions, see:
https://vulkan.lunarg.com/doc/sdk/latest/linux/getting_started.html (Linux)
https://vulkan.lunarg.com/doc/sdk/latest/windows/getting_started.html (Windows)
https://vulkan.lunarg.com/doc/sdk/latest/mac/getting_started.html (macOS)
To run the Vulkan backend, you also need to install a Mesa driver.
On Ubuntu, use the command 'sudo apt-get install mesa-vulkan-drivers'
To test, use command '$BUILD_DIR/bin/opencv_test_dnn --gtest_filter=*VkCom*'
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: dynamically load Vulkan runtime
No compile-time dependency on the Vulkan library.
If the Vulkan runtime is unavailable, fall back to the CPU path.
Use the environment variable "OPENCV_VULKAN_RUNTIME" to specify the path
to your own Vulkan runtime library.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: Add a python script to compile GLSL shaders to SPIR-V shaders
The SPIR-V shaders are stored as text-based 32-bit hexadecimal
numbers and inserted into .cpp files as unsigned int32 arrays.
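A generated file then looks roughly like the excerpt below (array name and contents are illustrative; only the leading SPIR-V magic number is real):

```cpp
// Illustrative excerpt of a generated .cpp file: the SPIR-V binary is embedded
// as an array of 32-bit words, so no shader files have to ship separately.
extern const unsigned int relu_spv[];   // name is hypothetical
const unsigned int relu_spv[] = {
    0x07230203, 0x00010000, 0x00080001, 0x0000002a,  // SPIR-V magic number + header words
    /* ... remaining words emitted by the compile script ... */
};
```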
* dnn/Vulkan: Put Vulkan headers into 3rdparty directory and some other fixes
Vulkan header files are copied from
https://github.com/KhronosGroup/Vulkan-Docs/tree/master/include/vulkan
to 3rdparty/include
Fix the Copyright declaration issue.
Refine OpenCVDetectVulkan.cmake
* dnn/Vulkan: Add Vulkan backend tests into existing ones.
Also fixed some test failures.
- Don't use a bool variable as a uniform for the shader
- Fix dispatched group number exceeding the maximum
- Bypass "group > 1" convolution; this should be supported in the future.
* dnn/Vulkan: Fix multiple initialization in one thread.
Support asymmetric padding in pooling layer (#12519)
* Add Inception_V1 support in ONNX
* Add asymmetric padding in OpenCL and Inference Engine
* Refactoring
Feature/region layer batch mode (#12249)
* Add batch mode for Darknet networks (batched-blob sketch after this block).
Swap variables in test_darknet.
Adapt reorg layer to batch mode.
Adapt region layer.
Add OpenCL implementation.
Remove trailing whitespace.
Bugfix reorg OpenCL implementation.
Fix bug in OpenCL reorg.
Fix modulo bug.
Fix bug.
Reorg OpenCL.
Restore reorg layer OpenCL code.
OpenCL fix.
Work on OpenCL reorg.
Remove whitespace.
Fix OpenCL region layer implementation.
Fix bug.
Fix softmax region OpenCL bug.
Fix OpenCL bug.
Fix OpenCL bug.
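The batched-blob sketch referenced above, assuming a YOLO-style model and placeholder file names (cv::dnn::blobFromImages packs several images into one 4D input):

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

// Sketch: run a Darknet network on a batch of two images.
// File names and the 416x416 input size are placeholders.
cv::Mat forwardBatch(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov3.cfg", "yolov3.weights");
    std::vector<cv::Mat> images = {img1, img2};
    cv::Mat blob = cv::dnn::blobFromImages(images, 1.0 / 255.0, cv::Size(416, 416),
                                           cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);               // blob shape: [2, 3, 416, 416]
    return net.forward();             // region layer now emits detections for both images
}
```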
Update aff_trans.cpp
When the fullAffine parameter is set to false, the estimateRigidTransform function may return an empty matrix; the _localAffineEstimate function is then called, but a bug in it produces incorrect results.
core(libva): support YV12 too
Added to CPU path only.
OpenCL code path still expects NV12 only (according to Intel OpenCL extension)
cmake: allow to specify own libva paths
via CMake:
- `-DVA_LIBRARIES=/opt/intel/mediasdk/lib64/libva.so.2\;/opt/intel/mediasdk/lib64/libva-drm.so.2`
android: NDK17 support
tested with NDK 17b (17.1.4828580)
Enable more deep learning tests using Intel's Inference Engine backend
ts: don't pass NULL to the std::string() constructor
openvino: use 2018R3 defines
experimental version++
OpenCV version++
OpenCV 3.4.3
OpenCV version '-openvino'
openvino: use 2018R3 defines
Fixed windows build with InferenceEngine
dnn: fix variance setting bug for PriorBoxLayer
- The size of the second channel should be size[2] of the output tensor.
- The Scalar should be {variance[0], variance[0], variance[0], variance[0]}
for the _variance.size() == 1 case.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
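A simplified sketch of the intended behaviour (function and variable names are illustrative, not the exact layer source):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Simplified sketch (not the exact layer source): the variance plane is the
// second channel of the PriorBox output and its length is the output tensor's
// size[2]; with a single variance value the whole plane is filled with it.
static void fillVariance(cv::Mat& output, const std::vector<float>& variance)
{
    float* varPtr = output.ptr<float>(0, 1);           // second channel of a [1, 2, N] tensor
    cv::Mat varPlane(1, output.size[2], CV_32F, varPtr);
    if (variance.size() == 1)
        varPlane.setTo(cv::Scalar::all(variance[0]));  // {v0, v0, v0, v0}
    // the multi-value case repeats {v0, v1, v2, v3} per prior (omitted here)
}
```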
Fix lifetime of networks which are loaded from Model Optimizer IRs
Adds a small note describing BUILD_opencv_world (#12332)
* Added a small note describing the BUILD_opencv_world cmake option to the Installation in Windows tutorial.
* Made slight changes in BUILD_opencv_world documentation.
* Update windows_install.markdown
improved grammar
Update opengl_interop.cpp
resolves #12307
java: fix LIST_GET macro
fix typo
Added option to fail on missing testdata
Fixed object_detection.py not working in Python 3.
cleanup: IPP Async (IPP_A)
except the header file with conversion routines (to be removed in OpenCV 4.0)
imgcodecs: add null pointer check
Include preprocessing nodes to object detection TensorFlow networks (#12211)
* Include preprocessing nodes to object detection TensorFlow networks
* Enable more fusion
* faster_rcnn_resnet50_coco_2018_01_28 test
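Loading such a network together with its generated text graph is unchanged; a minimal sketch with placeholder file names:

```cpp
#include <opencv2/dnn.hpp>

// Sketch: import a TensorFlow object detection model plus its text graph;
// with this change the preprocessing nodes are part of the imported network.
// File names are placeholders.
cv::dnn::Net loadDetector()
{
    return cv::dnn::readNetFromTensorflow("frozen_inference_graph.pb",
                                          "faster_rcnn_resnet50_coco_2018_01_28.pbtxt");
}
```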
countNonZero function reworked to use wide universal intrinsics instead of SSE2 intrinsics
resolves #5788
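A condensed sketch of the idea behind the rework (not the actual implementation): compare lanes against zero with the universal intrinsics and accumulate the all-ones masks, so the same code maps to SSE2, AVX2, NEON, etc.

```cpp
#include <opencv2/core/hal/intrin.hpp>

// Condensed sketch (not the real countNonZero): each non-zero lane yields an
// all-ones mask (-1); subtracting the mask from an accumulator counts it.
static int countNonZero32s(const int* src, int len)
{
    int i = 0, nz = 0;
#if CV_SIMD128
    cv::v_int32x4 vsum  = cv::v_setzero_s32();
    cv::v_int32x4 vzero = cv::v_setzero_s32();
    for (; i <= len - 4; i += 4)
    {
        cv::v_int32x4 v = cv::v_load(src + i);
        vsum -= (v != vzero);          // -1 per non-zero lane
    }
    nz = cv::v_reduce_sum(vsum);
#endif
    for (; i < len; i++)               // scalar tail
        nz += src[i] != 0;
    return nz;
}
```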
imgcodecs(webp): multiple fixes
- don't reallocate passed 'img' (test fixed - must use IMREAD_UNCHANGED / IMREAD_ANYCOLOR)
- avoid memory DoS
- avoid reading of the whole file during header processing
- avoid data access beyond the allocated buffer during header processing (missing checks)
- use WebPFree() to free allocated buffers (libwebp >= 0.5.0)
- drop unused & undefined `.close()` method
- added checks for channels >= 5 in encoder
ml: fix adjusting K in KNearest (#12358)
dnn(perf): fix and merge Convolution tests
- OpenCL tests didn't run any OpenCL kernels
- use real configurations from existing models (the first 100 cases)
- batch size = 1
dnn(test): use dnnBackendsAndTargets() param generator
Bit-exact resize reworked to use wide intrinsics (#12038)
* Bit-exact resize reworked to use wide intrinsics
* Reworked bit-exact resize row data loading
* Added bit-exact resize row data loaders for SIMD256 and SIMD512
* Fixed type-punned pointer dereferencing warning
* Reworked loading of source data for SIMD256 and SIMD512 bit-exact resize
Bit-exact GaussianBlur reworked to use wide intrinsics (#12073)
* Bit-exact GaussianBlur reworked to use wide intrinsics
* Added v_mul_hi universal intrinsic
* Removed custom SSE2 branch from bit-exact GaussianBlur
* Removed loop unrolling for gaussianBlur horizontal smoothing
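The new v_mul_hi intrinsic returns the high 16 bits of a 16x16-bit product, i.e. (a*b) >> 16, which matches Q16 fixed-point filtering; a tiny illustration (the function and coefficient are made up, not the actual GaussianBlur code):

```cpp
#include <opencv2/core/hal/intrin.hpp>

// Tiny illustration of v_mul_hi: per-lane (a * b) >> 16, i.e. multiplication
// by a Q16 fixed-point coefficient. Not the actual bit-exact filter code.
static void scaleRowQ16(const ushort* src, ushort* dst, int len, ushort coefQ16)
{
    int i = 0;
#if CV_SIMD128
    cv::v_uint16x8 vcoef = cv::v_setall_u16(coefQ16);
    for (; i <= len - 8; i += 8)
        cv::v_store(dst + i, cv::v_mul_hi(cv::v_load(src + i), vcoef));
#endif
    for (; i < len; i++)
        dst[i] = (ushort)(((unsigned)src[i] * coefQ16) >> 16);
}
```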
doc: fix English grammar in tutorial out-of-focus-deblur filter (#12214)
* doc: fix English grammar in tutorial out-of-focus-deblur filter
* Update out_of_focus_deblur_filter.markdown
slightly modified one sentence
doc: add new tutorial motion deblur filter (#12215)
* doc: add new tutorial motion deblur filter
* Update motion_deblur_filter.markdown
a few minor changes
Replace Slice layer with Crop in Faster-RCNN networks from Caffe
js: use generated list of OpenCV headers
- replaces hand-written list
imgcodecs(webp): use safe cast to size_t on Win32
* Put Version status back to -dev.
Follow the common codestyle.
Exclude some target engines.
Refactor formulas.
Refactor code.
* Remove unused variable.
* Remove inference engine check for yolov2.
* Alter darknet batch tests to test with two different images.
* Add yolov3 second image GT.
* Fix bug.
* Fix bug.
* Add second test.
* Remove comment.
* Add NMS on network level (NMSBoxes sketch after this list).
* Add helper files to dev.
* Syntax fix.
* Fix OD sample.
Fix sample dnn object detection.
Fix NMS boxes bug.
Remove trailing whitespace.
Remove debug function.
Change thresholds for OpenCL tests.
* Adapt score diff and iou diff.
* Alter iouDiffs.
* Add debug messages.
* Adapt iouDiff.
* Fix tests
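The network-level NMS mentioned above builds on the same cv::dnn::NMSBoxes routine used in the detection samples; a usage sketch (threshold values are illustrative):

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

// Sketch: class-wise non-maximum suppression as used in the detection samples.
// The score and IoU thresholds are illustrative.
std::vector<int> suppress(const std::vector<cv::Rect>& boxes,
                          const std::vector<float>& confidences)
{
    std::vector<int> keep;
    cv::dnn::NMSBoxes(boxes, confidences, /*score_threshold=*/0.5f,
                      /*nms_threshold=*/0.4f, keep);
    return keep;   // indices of the boxes that survive NMS
}
```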
* Add Squeezenet support in ONNX (loading sketch after the ONNX entries)
* Add AlexNet support in ONNX
* Add Googlenet support in ONNX
* Add CaffeNet and RCNN support in ONNX
* Add VGG16 and VGG16 with batch normalization support in ONNX
* Add RCNN, ZFNet, ResNet18v1 and ResNet50v1 support in ONNX
* Add ResNet101_DUC_HDC
* Add Tiny Yolov2
* Add CNN_MNIST, MobileNetv2 and LResNet100 support in ONNX
* Add ONNX models for emotion recognition
* Add DenseNet121 support in ONNX
* Add Inception v1 support in ONNX
* Refactoring
* Fix tests
* Fix tests
* Skip unstable test
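All of the models above go through the same importer entry point; a minimal loading sketch (file name and preprocessing values are placeholders):

```cpp
#include <opencv2/dnn.hpp>

// Minimal sketch: import an ONNX model and run one forward pass.
// The file name, input size and mean values are placeholders.
cv::Mat classify(const cv::Mat& image)
{
    cv::dnn::Net net = cv::dnn::readNetFromONNX("squeezenet.onnx");
    cv::Mat blob = cv::dnn::blobFromImage(image, 1.0, cv::Size(224, 224),
                                          cv::Scalar(104, 117, 123), /*swapRB=*/false);
    net.setInput(blob);
    return net.forward();
}
```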
* Modify Reshape operation
* Remove a forward method in dnn::Layer
* Add a test
* Fix tests
* Mark multiple dnn::Layer::finalize methods as deprecated
* Replace dnn's inputBlobs back with a vector of pointers
* Remove Layer::forward_fallback from CV_OCL_RUN scopes
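For custom layers this means overriding the InputArrayOfArrays-based finalize/forward variants instead of the deprecated raw-pointer ones; a skeletal sketch (the class and its identity behaviour are hypothetical):

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

// Skeletal custom layer using the non-deprecated signatures.
// The class name and identity behaviour are hypothetical.
class MyIdentityLayer CV_FINAL : public cv::dnn::Layer
{
public:
    MyIdentityLayer(const cv::dnn::LayerParams& params) : Layer(params) {}

    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams& params)
    {
        return cv::Ptr<cv::dnn::Layer>(new MyIdentityLayer(params));
    }

    // New-style finalize replaces the deprecated std::vector<Mat*> overload.
    void finalize(cv::InputArrayOfArrays inputs, cv::OutputArrayOfArrays outputs) CV_OVERRIDE
    {
        (void)inputs; (void)outputs;   // shape-dependent initialization would go here
    }

    void forward(cv::InputArrayOfArrays inputs, cv::OutputArrayOfArrays outputs,
                 cv::OutputArrayOfArrays internals) CV_OVERRIDE
    {
        std::vector<cv::Mat> in, out;
        inputs.getMatVector(in);
        outputs.getMatVector(out);
        for (size_t i = 0; i < in.size(); i++)
            in[i].copyTo(out[i]);      // identity pass-through
        (void)internals;
    }
};
```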