Feature/region layer batch mode (#12249)
* Add batch mode for Darknet networks.
Swap variables in test_darknet.
Adapt reorg layer to batch mode.
Adapt region layer.
Add OpenCL implementation.
Remove trailing whitespace.
Bugfix reorg OpenCL implementation.
Fix bug in OpenCL reorg.
Fix modulo bug.
Fix bug.
Reorg OpenCL.
Restore reorg layer opencl code.
OpenCL fix.
Work on OpenCL reorg.
Remove whitespace.
Fix OpenCL region layer implementation.
Fix bug.
Fix softmax region opencl bug.
Fix opencl bug.
Fix OpenCL bug.
Update aff_trans.cpp
When the fullAffine parameter is set to false, the estimateRigidTransform function may return an empty matrix; the _localAffineEstimate function is then called, but a bug in it leads to incorrect results.
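A minimal sketch of the failure mode described above (the point sets are placeholders, not code from the patch): with fullAffine=false, estimateRigidTransform can return an empty matrix, which callers need to check for.

```cpp
#include <opencv2/video/tracking.hpp>
#include <vector>

int main()
{
    // Placeholder correspondences; in the shape module these come from matched keypoints.
    std::vector<cv::Point2f> src = { {0, 0}, {10, 0}, {0, 10}, {10, 10} };
    std::vector<cv::Point2f> dst = { {1, 1}, {11, 1}, {1, 11}, {11, 11} };

    // fullAffine=false requests a rigid/similarity transform; the result may be empty.
    cv::Mat warp = cv::estimateRigidTransform(src, dst, /*fullAffine=*/false);
    if (warp.empty())
    {
        // This is the case that used to fall through to the buggy _localAffineEstimate path.
        return 1;
    }
    return 0;
}
```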
core(libva): support YV12 too
Added to CPU path only.
OpenCL code path still expects NV12 only (according to Intel OpenCL extension)
cmake: allow specifying custom libva paths
via CMake:
- `-DVA_LIBRARIES=/opt/intel/mediasdk/lib64/libva.so.2\;/opt/intel/mediasdk/lib64/libva-drm.so.2`
android: NDK17 support
tested with NDK 17b (17.1.4828580)
Enable more deep learning tests using Intel's Inference Engine backend
ts: don't pass NULL for std::string() constructor
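For context, a minimal illustration (not the actual test-system code) of why this matters: constructing std::string from a null char pointer is undefined behavior.

```cpp
#include <string>

int main()
{
    const char* s = 0;                  // e.g. a getenv() result that may be NULL
    // std::string bad(s);              // undefined behavior when s is NULL
    std::string ok = s ? std::string(s) : std::string();  // safe pattern
    return (int)ok.size();
}
```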
openvino: use 2018R3 defines
experimental version++
OpenCV version++
OpenCV 3.4.3
OpenCV version '-openvino'
openvino: use 2018R3 defines
Fixed windows build with InferenceEngine
dnn: fix variance setting bug for PriorBoxLayer
- The size of the second channel should be size[2] of the output tensor.
- The Scalar should be {variance[0], variance[0], variance[0], variance[0]}
for the _variance.size() == 1 case.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
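A hedged reconstruction of the single-variance fill described above; the buffer layout and variable names below are assumptions for illustration only, not the actual PriorBoxLayer code.

```cpp
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // Illustrative stand-ins for the layer's internals (assumed, see above).
    std::vector<int> outputShape(3); outputShape[0] = 1; outputShape[1] = 2; outputShape[2] = 64;
    std::vector<float> variance(1, 0.1f);             // the _variance.size() == 1 case
    cv::Mat output(1, outputShape[2], CV_32F);        // pretend second output channel

    // View the channel as 4-channel elements so one setTo() writes variance[0] into every lane.
    cv::Mat view(1, outputShape[2] / 4, CV_32FC4, output.ptr<float>());
    view.setTo(cv::Scalar(variance[0], variance[0], variance[0], variance[0]));
    return 0;
}
```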
Fix lifetime of networks which are loaded from Model Optimizer IRs
Adds a small note describing BUILD_opencv_world (#12332)
* Added a small note describing the BUILD_opencv_world CMake option to the Installation in Windows tutorial.
* Made slight changes in BUILD_opencv_world documentation.
* Update windows_install.markdown
improved grammar
Update opengl_interop.cpp
resolves #12307
java: fix LIST_GET macro
fix typo
Added option to fail on missing testdata
Fixed object_detection.py not working in Python 3.
cleanup: IPP Async (IPP_A)
except header file with conversion routines (will be removed in OpenCV 4.0)
imgcodecs: add null pointer check
Include preprocessing nodes to object detection TensorFlow networks (#12211)
* Include preprocessing nodes to object detection TensorFlow networks
* Enable more fusion
* faster_rcnn_resnet50_coco_2018_01_28 test
countNonZero function reworked to use wide universal intrinsics instead of SSE2 intrinsics
resolves #5788
imgcodecs(webp): multiple fixes
- don't reallocate the passed 'img' (test fixed: must use IMREAD_UNCHANGED / IMREAD_ANYCOLOR; see the sketch after this list)
- avoid memory DoS from excessive allocations
- avoid reading of whole file during header processing
- avoid data access after allocated buffer during header processing (missing checks)
- use WebPFree() to free allocated buffers (libwebp >= 0.5.0)
- drop unused & undefined `.close()` method
- added checks for channels >= 5 in encoder
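A minimal usage sketch of the flag mentioned in the first item (the file name is a placeholder): reading WebP with IMREAD_UNCHANGED keeps the decoded data as stored, including an alpha channel if present, so imread() does not have to reallocate or convert the buffer.

```cpp
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::Mat img = cv::imread("input.webp", cv::IMREAD_UNCHANGED);  // no forced conversion
    return img.empty() ? 1 : 0;
}
```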
ml: fix adjusting K in KNearest (#12358)
dnn(perf): fix and merge Convolution tests
- OpenCL tests didn't run any OpenCL kernels
- use real configurations from existing models (the first 100 cases)
- batch size = 1
dnn(test): use dnnBackendsAndTargets() param generator
Bit-exact resize reworked to use wide intrinsics (#12038)
* Bit-exact resize reworked to use wide intrinsics
* Reworked bit-exact resize row data loading
* Added bit-exact resize row data loaders for SIMD256 and SIMD512
* Fixed type punned pointer dereferencing warning
* Reworked loading of source data for SIMD256 and SIMD512 bit-exact resize
Bit-exact GaussianBlur reworked to use wide intrinsics (#12073)
* Bit-exact GaussianBlur reworked to use wide intrinsics
* Added v_mul_hi universal intrinsic
* Removed custom SSE2 branch from bit-exact GaussianBlur
* Removed loop unrolling for gaussianBlur horizontal smoothing
doc: fix English grammar in tutorial out-of-focus-deblur filter (#12214)
* doc: fix English grammar in tutorial out-of-focus-deblur filter
* Update out_of_focus_deblur_filter.markdown
slightly modified one sentence
doc: add new tutorial motion deblur filter (#12215)
* doc: add new tutorial motion deblur filter
* Update motion_deblur_filter.markdown
a few minor changes
Replace Slice layer with Crop in Faster-RCNN networks from Caffe
js: use generated list of OpenCV headers
- replaces hand-written list
imgcodecs(webp): use safe cast to size_t on Win32
* Put Version status back to -dev.
follow the common codestyle
Exclude some target engines.
Refactor formulas.
Refactor code.
* Remove unused variable.
* Remove inference engine check for yolov2.
* Alter darknet batch tests to test with two different images.
* Add yolov3 second image GT.
* Fix bug.
* Fix bug.
* Add second test.
* Remove comment.
* Add NMS on network level.
* Add helper files to dev.
* syntax fix.
* Fix OD sample.
Fix sample dnn object detection.
Fix NMS boxes bug.
remove trailing whitespace.
Remove debug function.
Change thresholds for opencl tests.
* Adapt score diff and iou diff.
* Alter iouDiffs.
* Add debug messages.
* Adapt iouDiff.
* Fix tests
* Add Squeezenet support in ONNX
* Add AlexNet support in ONNX
* Add Googlenet support in ONNX
* Add CaffeNet and RCNN support in ONNX
* Add VGG16 and VGG16 with batch normalization support in ONNX
* Add RCNN, ZFNet, ResNet18v1 and ResNet50v1 support in ONNX
* Add ResNet101_DUC_HDC
* Add Tiny Yolov2
* Add CNN_MNIST, MobileNetv2 and LResNet100 support in ONNX
* Add ONNX models for emotion recognition
* Add DenseNet121 support in ONNX
* Add Inception v1 support in ONNX
* Refactoring
* Fix tests
* Fix tests
* Skip unstable test
* Modify Reshape operation
* Remove a forward method in dnn::Layer
* Add a test
* Fix tests
* Mark multiple dnn::Layer::finalize methods as deprecated
* Replace back dnn's inputBlobs to vector of pointers
* Remove Layer::forward_fallback from CV_OCL_RUN scopes
Remove some assertions
Replace std::ifstream with std::istream
Add test for new importer
Remove constructor to load file
Rename cfgStream and darknetModelStream to ifile
Add an error message that reports the pathname to the user
Use FileStorage instead of std::istream
Use FileNode instead of FileStorage
Fix typo
* Added ResizeBilinear op for tf
Combined ResizeNearestNeighbor and ResizeBilinear layers into Resize (with an interpolation param).
Minor changes to tf_importer and resize layer to save some code lines
Minor changes in init.cpp
Minor changes in tf_importer.cpp
* Replaced implementation of a custom ResizeBilinear layer to all layers
* Use Mat::ptr. Replace interpolation flags
* Added the imagesFromBlob method to the dnn module.
* Rewrote imagesFromBlob based on dkurt's first comments
* Updated code with getPlane()
* Modify comment of imagesFromBlob() in dnn module
* Modified comments, removed useless assertions & added OutputArrayOfArrays
* replaced tabs with whitespaces & put vectorOfChannels instantiation outside the loop
* Changed pre-commit.sample to pre-commit in .git/hooks/
* Added a test for imagesFromBlob in test_misc.cpp (dnn)
* Changed nbOfImages, robustified test with cv::randu, modified assertion
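A small round-trip sketch of the new helper (image sizes and values are placeholders): pack images into an NCHW blob with blobFromImages(), then recover them with imagesFromBlob().

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

int main()
{
    // Two dummy 224x224 BGR images.
    std::vector<cv::Mat> inputs(2, cv::Mat(224, 224, CV_8UC3, cv::Scalar::all(127)));
    cv::Mat blob = cv::dnn::blobFromImages(inputs, 1.0, cv::Size(224, 224));

    std::vector<cv::Mat> images;
    cv::dnn::imagesFromBlob(blob, images);   // expect 2 images of 224x224, CV_32FC3
    return images.size() == inputs.size() ? 0 : 1;
}
```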
Add a layer forward interface with InputArrayOfArrays and
OutputArrayOfArrays parameters; this allows UMat buffers to be
processed and transferred inside the layers.
Signed-off-by: Li Peng <peng.li@intel.com>
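A hedged sketch of what the new signature enables (the helper name and structure are illustrative, not the patch itself): a layer can pull its inputs as UMat when OpenCL is active and they were passed as UMat, and fall back to plain Mat otherwise.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <vector>

// Assumes inputs and outputs are passed with matching array kinds.
static void forwardOnAnyBuffer(cv::InputArrayOfArrays inputs_arr,
                               cv::OutputArrayOfArrays outputs_arr)
{
    if (cv::ocl::useOpenCL() && inputs_arr.isUMatVector())
    {
        std::vector<cv::UMat> inputs, outputs;
        inputs_arr.getUMatVector(inputs);    // no download: buffers stay on the device
        outputs_arr.getUMatVector(outputs);
        // ... run OpenCL kernels on 'inputs' / 'outputs' here ...
        return;
    }
    std::vector<cv::Mat> inputs, outputs;    // CPU fallback path
    inputs_arr.getMatVector(inputs);
    outputs_arr.getMatVector(outputs);
    // ... run the CPU implementation here ...
}

int main()
{
    std::vector<cv::Mat> in(1, cv::Mat(2, 2, CV_32F, cv::Scalar(1))), out(1);
    forwardOnAnyBuffer(in, out);
    return 0;
}
```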
add libdnn acceleration to dnn module (#9114)
* import libdnn code
Signed-off-by: Li Peng <peng.li@intel.com>
* add convolution layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>
* add pooling layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>
* add softmax layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>
* add lrn layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>
* add innerproduct layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>
* add HAVE_OPENCL macro
Signed-off-by: Li Peng <peng.li@intel.com>
* fix for convolution ocl
Signed-off-by: Li Peng <peng.li@intel.com>
* enable getUMat() for multi-dimensional Mat
Signed-off-by: Li Peng <peng.li@intel.com>
* use getUMat for ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>
* use CV_OCL_RUN macro
Signed-off-by: Li Peng <peng.li@intel.com>
* set OPENCL target when it is available
and disable fuseLayer for OCL target for the time being
Signed-off-by: Li Peng <peng.li@intel.com>
* fix innerproduct accuracy test
Signed-off-by: Li Peng <peng.li@intel.com>
* remove trailing space
Signed-off-by: Li Peng <peng.li@intel.com>
* Fixed TensorFlow demo bug.
The root cause is that TensorFlow uses a different algorithm than libdnn
to calculate the convolution output dimensions.
libdnn no longer calculates the output dimensions and just uses the ones
passed in by the config.
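A hedged illustration of the mismatch described above (these are the standard output-size formulas, not code from the patch): Caffe-style explicit padding and TensorFlow's "SAME" padding can produce different shapes for the same input.

```cpp
#include <cstdio>

// Caffe-style output size with explicit padding.
static int convOutExplicitPad(int in, int kernel, int pad, int stride)
{
    return (in + 2 * pad - kernel) / stride + 1;
}

// TensorFlow "SAME" padding: output depends only on input size and stride.
static int convOutTFSame(int in, int stride)
{
    return (in + stride - 1) / stride;   // ceil(in / stride)
}

int main()
{
    // in=10, kernel=3, pad=0, stride=2 -> 4 vs 5: the shapes disagree.
    std::printf("%d vs %d\n", convOutExplicitPad(10, 3, 0, 2), convOutTFSame(10, 2));
    return 0;
}
```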
* split gemm ocl file
split it into gemm_buffer.cl and gemm_image.cl
Signed-off-by: Li Peng <peng.li@intel.com>
* Fix compile failure
Signed-off-by: Li Peng <peng.li@intel.com>
* check env flag for auto tuning
Signed-off-by: Li Peng <peng.li@intel.com>
* switch to new ocl kernels for softmax layer
Signed-off-by: Li Peng <peng.li@intel.com>
* update softmax layer
On some platforms the subgroup extension may not work well;
fall back to the non-subgroup OCL acceleration.
Signed-off-by: Li Peng <peng.li@intel.com>
* fall back to the CPU path for FC layers with multiple outputs
Signed-off-by: Li Peng <peng.li@intel.com>
* update output message
Signed-off-by: Li Peng <peng.li@intel.com>
* update fully connected layer
fall back to the gemm API if libdnn returns false
Signed-off-by: Li Peng <peng.li@intel.com>
* Add ReLU OCL implementation
* disable layer fusion for now
Signed-off-by: Li Peng <peng.li@intel.com>
* Add OCL implementation for concat layer
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* libdnn: update license and copyrights
Also refine libdnn coding style
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Li Peng <peng.li@intel.com>
* DNN: Don't link OpenCL library explicitly
* DNN: Make default preferableTarget to DNN_TARGET_CPU
Users should set it to DNN_TARGET_OPENCL explicitly if they want to
use OpenCL acceleration.
Also, don't fuse layers when using DNN_TARGET_OPENCL.
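A minimal usage sketch for the new default (the model file names are placeholders): OpenCL acceleration must now be requested explicitly.

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("model.prototxt", "model.caffemodel");
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);  // opt in; the default is DNN_TARGET_CPU
    return 0;
}
```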
* DNN: refine coding style
* Add getOpenCLErrorString
* DNN: Use int32_t/uint32_t instead of aliases
* Use namespace ocl4dnn to include libdnn things
* remove extra copyTo in softmax ocl path
Signed-off-by: Li Peng <peng.li@intel.com>
* update ReLU layer ocl path
Signed-off-by: Li Peng <peng.li@intel.com>
* Add prefer target property for layer class
It is used to indicate the target for layer forwarding,
either the default CPU target or OCL target.
Signed-off-by: Li Peng <peng.li@intel.com>
* Add cl_event based timer for cv::ocl
* Rename libdnn to ocl4dnn
Signed-off-by: Li Peng <peng.li@intel.com>
Signed-off-by: wzw <zhiwen.wu@intel.com>
* use UMat for ocl4dnn internal buffer
Remove allocateMemory, which used clCreateBuffer directly
Signed-off-by: Li Peng <peng.li@intel.com>
Signed-off-by: wzw <zhiwen.wu@intel.com>
* enable buffer gemm in ocl4dnn innerproduct
Signed-off-by: Li Peng <peng.li@intel.com>
* replace int_tp globally for ocl4dnn kernels.
Signed-off-by: wzw <zhiwen.wu@intel.com>
Signed-off-by: Li Peng <peng.li@intel.com>
* create UMat for layer params
Signed-off-by: Li Peng <peng.li@intel.com>
* update sign ocl kernel
Signed-off-by: Li Peng <peng.li@intel.com>
* update image based gemm of inner product layer
Signed-off-by: Li Peng <peng.li@intel.com>
* remove buffer gemm of inner product layer
call cv::gemm API instead
Signed-off-by: Li Peng <peng.li@intel.com>
* change ocl4dnn forward parameter to UMat
Signed-off-by: Li Peng <peng.li@intel.com>
* Refine auto-tuning mechanism.
- Use OPENCV_OCL4DNN_KERNEL_CONFIG_PATH to set the cache directory
for fine-tuned kernel configurations.
e.g. with export OPENCV_OCL4DNN_KERNEL_CONFIG_PATH=/home/tmp,
the cache directory will be /home/tmp/spatialkernels/ on Linux.
- Define the environment variable OPENCV_OCL4DNN_ENABLE_AUTO_TUNING to enable
auto-tuning.
- OPENCV_OPENCL_ENABLE_PROFILING is only used to enable profiling
for the OpenCL command queue. This fixes the basic kernel reporting a wrong
running time, i.e. 0 ms.
- If creating the cache directory fails, disable auto-tuning.
* Detect and create cache dir on windows
Signed-off-by: Li Peng <peng.li@intel.com>
* Refine gemm like convolution kernel.
Signed-off-by: Li Peng <peng.li@intel.com>
* Fix redundant swizzleWeights calls when using a cached kernel config.
* Fix "out of resource" bug when auto-tuning too many kernels.
* replace cl_mem with UMat in ocl4dnnConvSpatial class
* OCL4DNN: reduce the tuning kernel candidates.
This patch can reduce the tuning candidates by 75% with less
than 2% performance impact on the final result.
Signed-off-by: Zhigang Gong <zhigang.gong@intel.com>
* replace cl_mem with umat in ocl4dnn convolution
Signed-off-by: Li Peng <peng.li@intel.com>
* remove weight_image_ of ocl4dnn inner product
Actually it is unused in the computation
Signed-off-by: Li Peng <peng.li@intel.com>
* Various fixes for ocl4dnn
1. OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel())
2. Ptr<OCL4DNNInnerProduct<float> > innerProductOp
3. Code comments cleanup
4. ignore check on OCL cpu device
Signed-off-by: Li Peng <peng.li@intel.com>
* add build option for log softmax
Signed-off-by: Li Peng <peng.li@intel.com>
* remove unused ocl kernels in ocl4dnn
Signed-off-by: Li Peng <peng.li@intel.com>
* replace ocl4dnnSet with opencv setTo
Signed-off-by: Li Peng <peng.li@intel.com>
* replace ALIGN with cv::alignSize
Signed-off-by: Li Peng <peng.li@intel.com>
* check kernel build options
Signed-off-by: Li Peng <peng.li@intel.com>
* Handle program compilation failure properly.
* Use std::numeric_limits<float>::infinity() for large float number
* check ocl4dnn kernel compilation result
Signed-off-by: Li Peng <peng.li@intel.com>
* remove unused ctx_id
Signed-off-by: Li Peng <peng.li@intel.com>
* change clEnqueueNDRangeKernel to kernel.run()
Signed-off-by: Li Peng <peng.li@intel.com>
* change cl_mem to UMat in image based gemm
Signed-off-by: Li Peng <peng.li@intel.com>
* check intel subgroup support for lrn and pooling layer
Signed-off-by: Li Peng <peng.li@intel.com>
* Fix convolution bug if group is greater than 1
Signed-off-by: Li Peng <peng.li@intel.com>
* Set default layer preferableTarget to be DNN_TARGET_CPU
Signed-off-by: Li Peng <peng.li@intel.com>
* Add ocl perf test for convolution
Signed-off-by: Li Peng <peng.li@intel.com>
* Add more ocl accuracy test
Signed-off-by: Li Peng <peng.li@intel.com>
* replace cl_image with ocl::Image2D
Signed-off-by: Li Peng <peng.li@intel.com>
* Fix build failure in elementwise layer
Signed-off-by: Li Peng <peng.li@intel.com>
* use getUMat() to get blob data
Signed-off-by: Li Peng <peng.li@intel.com>
* replace cl_mem handle with ocl::KernelArg
Signed-off-by: Li Peng <peng.li@intel.com>
* dnn(build): don't use C++11, OPENCL_LIBRARIES fix
* dnn(ocl4dnn): remove unused OpenCL kernels
* dnn(ocl4dnn): extract OpenCL code into .cl files
* dnn(ocl4dnn): refine auto-tuning
Auto-tuning is disabled by default; set the OPENCV_OCL4DNN_ENABLE_AUTO_TUNING
environment variable to enable it.
Use a set of pre-tuned configs as the default config if auto-tuning is disabled.
These configs are tuned for Intel GPUs with 48/72 EUs, and for GoogLeNet,
AlexNet and ResNet-50.
If the default config is not suitable, use the first available kernel config
from the candidates. Candidate priority from high to low is: gemm-like kernel,
IDLF kernel, basic kernel.
* dnn(ocl4dnn): pooling doesn't use OpenCL subgroups
* dnn(ocl4dnn): fix perf test
OpenCV has a default 3-second time limit for each performance test.
Warm up the OpenCL backend outside of the perf measurement loop.
* use ocl::KernelArg as much as possible
Signed-off-by: Li Peng <peng.li@intel.com>
* dnn(ocl4dnn): fix bias bug for gemm like kernel
* dnn(ocl4dnn): wrap cl_mem into UMat
Signed-off-by: Li Peng <peng.li@intel.com>
* dnn(ocl4dnn): Refine signature of kernel config
- Use a more readable string as the signature of the kernel config
- Don't count device name and vendor in the signature string
- Default kernel configurations are tuned for Intel GPUs with
24/48/72 EUs, and for the GoogLeNet, AlexNet and ResNet-50 models.
* dnn(ocl4dnn): swap width/height in configuration
* dnn(ocl4dnn): enable configs for Intel OpenCL runtime only
* core: make configuration helper functions accessible from non-core modules
* dnn(ocl4dnn): update kernel auto-tuning behavior
Avoid unwanted creation of directories
* dnn(ocl4dnn): simplify kernel to workaround OpenCL compiler crash
* dnn(ocl4dnn): remove redundant code
* dnn(ocl4dnn): Add a clearer message for SIMD size mismatch.
* dnn(ocl4dnn): add const to const argument
Signed-off-by: Li Peng <peng.li@intel.com>
* dnn(ocl4dnn): force the compiler to use a specific SIMD size for the IDLF kernel
* dnn(ocl4dnn): drop unused tuneLocalSize()
* dnn(ocl4dnn): specify OpenCL queue for Timer and convolve() method
* dnn(ocl4dnn): sanitize file names used for cache
* dnn(perf): enable Network tests with OpenCL
* dnn(ocl4dnn/conv): drop computeGlobalSize()
* dnn(ocl4dnn/conv): drop unused fields
* dnn(ocl4dnn/conv): simplify ctor
* dnn(ocl4dnn/conv): refactor kernelConfig localSize=NULL
* dnn(ocl4dnn/conv): drop unsupported double / untested half types
* dnn(ocl4dnn/conv): drop unused variable
* dnn(ocl4dnn/conv): alignSize/divUp
* dnn(ocl4dnn/conv): use enum values
* dnn(ocl4dnn): drop unused innerproduct variable
Signed-off-by: Li Peng <peng.li@intel.com>
* dnn(ocl4dnn): add a generic function to check CL option support
* dnn(ocl4dnn): run softmax subgroup version kernel first
Signed-off-by: Li Peng <peng.li@intel.com>
fixed problem in concat layer by disabling memory re-use in layers with multiple inputs
trying to fix the tests when Halide is used to run deep nets
another attempt to fix Halide tests
see if the Halide tests will pass with concat layer fusion turned off
trying to fix failures in halide tests; another try
one more experiment to make halide_concat & halide_enet tests pass
continue attempts to fix halide tests
moving on
uncomment parallel concat layer
seemingly fixed failures in Halide tests and re-enabled concat layer fusion; thanks to dkurt for the patch
The main purpose of this namespace is to avoid the use of incompatible
binaries, which would cause application crashes.
This additional namespace does not affect the "source code API".
This change makes it possible to maintain ABI checks (with easy filtering out).
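A hedged illustration of the pattern described above (the namespace name is illustrative, not the exact one used by the module): an inline versioned namespace changes the mangled symbol names without changing the source-level API.

```cpp
namespace cv { namespace dnn {
inline namespace experimental_dnn_v1   // illustrative ABI-version tag
{
    class Net { /* ... */ };           // placeholder standing in for the real class
}
}} // namespace cv::dnn

int main()
{
    cv::dnn::Net net;   // source code is unchanged; the symbol is versioned at link time
    (void)net;
    return 0;
}
```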
* another round of dnn optimization:
* increased malloc alignment across OpenCV from 16 to 64 bytes to make it AVX2 and even AVX-512 friendly
* improved SIMD optimization of pooling layer, optimized average pooling
* cleaned up convolution layer implementation
* made the activation layer "attachable" to all other layers, including fully connected and addition layers.
* fixed a bug in the fusion algorithm: "LayerData::consumers" should not be cleared, because it describes the topology.
* greatly optimized permutation layer, which improved SSD performance
* parallelized element-wise binary/ternary/... ops (sum, prod, max)
* also, added missing copyrights to many of the layer implementation files
* temporarily disabled (again) the check for intermediate blobs consistency; fixed warnings from various builders