Commit Graph

109 Commits

Author SHA1 Message Date
Pierre Chatelier
6dd8a9b6ad
Merge pull request #13879 from chacha21:REDUCE_SUM2
add REDUCE_SUM2 #13879 

proposal to add REDUCE_SUM2 to cv::reduce, an operation that sums up the squares of the elements
2023-04-28 20:42:52 +03:00
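A hedged illustration of the operation proposed in the PR above, assuming the new flag is exposed as cv::REDUCE_SUM2 (the sample matrix values are made up):

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Mat src = (cv::Mat_<float>(2, 3) << 1, 2, 3,
                                            4, 5, 6);
    cv::Mat dst;
    // Reduce along dim=0 (collapse rows): each output element is the sum of
    // the squares of the corresponding column.
    cv::reduce(src, dst, 0, cv::REDUCE_SUM2, CV_32F);
    std::cout << dst << std::endl;  // expected: [17, 29, 45]
    return 0;
}
```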
Alexander Alekhin
583bd1a6e2 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2022-06-04 19:10:35 +00:00
Alexander Alekhin
65fcf22670 imgproc: use singleton in color_hsv.simd.hpp 2022-06-01 19:02:56 +00:00
Alexander Alekhin
8b4fa2605e Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2021-12-03 12:32:49 +00:00
Alexander Alekhin
57ee14d62d Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2021-11-27 16:50:55 +00:00
Alexander Alekhin
61f1ee2d2d core(logger): dump timestamp information with message 2021-11-20 15:34:23 +00:00
Alexander Alekhin
cfb77091ca Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2021-04-15 20:50:26 +00:00
Alexander Alekhin
222af8e7e4 core: avoid process cleanup deadlock if TlsStorage is not used 2021-04-09 16:08:08 +00:00
Alexander Alekhin
cc73c36e32 core(parallel): plugins support 2021-02-15 17:07:36 +00:00
Alexander Alekhin
2129c72bc0 core(OpenCL): thread-local OpenCL execution context 2020-09-02 05:04:20 +00:00
cyy
171cba4947 use C++11 static variables as memory barrier 2020-06-09 15:49:31 +08:00
Alexander Alekhin
ea5499fa51 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-10-29 20:46:51 +00:00
Alexander Alekhin
17e2bf5717 core(tls): implement releasing of TLS on thread termination
- move TLS & instrumentation code out of core/utility.hpp
- (*) TLSData lost the .gather() method (used to dispose of thread data on thread termination)
- use TLSDataAccumulator for reliable collection of thread data
- prefer using .detachData() + .cleanupDetachedData() instead of the .gather() method

(*) API is broken: replace TLSData => TLSDataAccumulator if gather() is required
(objects disposal on threads termination is not available in accumulator mode)
2019-10-24 06:36:18 +00:00
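A minimal sketch of the migration described in the commit above: use TLSDataAccumulator when per-thread data has to be collected. The header path and the getRef() accessor are assumptions based on OpenCV's TLS utilities of that time, not a verbatim excerpt:

```cpp
#include <opencv2/core/utils/tls.hpp>  // assumed location of the TLS utilities
#include <vector>

struct ThreadStats { int calls = 0; };

// was: cv::TLSData<ThreadStats> (no longer supports gathering)
static cv::TLSDataAccumulator<ThreadStats> g_stats;

void worker()
{
    g_stats.getRef().calls++;          // lazily creates the per-thread instance
}

void report()
{
    std::vector<ThreadStats*> snapshot;
    g_stats.detachData(snapshot);      // collect detached per-thread objects
    // ... aggregate snapshot here ...
    g_stats.cleanupDetachedData();     // dispose of the collected objects
}
```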
klemens
5d9c6723ee spelling fixes
backport 997b7b18af
2019-02-11 15:35:10 +03:00
klemens
997b7b18af spelling fixes 2019-02-09 22:29:54 +01:00
Alexander Alekhin
1913482cf5 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-11-10 20:50:26 +00:00
Alexander Alekhin
858a7da5c0 core: rework getContinuousSize() for vector-col/row support 2018-11-10 11:08:28 +00:00
Alexander Alekhin
596ada51f3 Merge pull request #13080 from alalek:issue_13078 2018-11-09 13:20:27 +00:00
Alexander Alekhin
5059523937 core: fix processing of vector-rows 2018-11-08 20:04:22 +03:00
Sayed Adel
93ffebc273 core: reimplement SIMD arithmetic, logic and comparison operations into wide universal intrinsics
- initialize arithmetic dispatcher
  - add new universal intrinsic v_absdiffs
  - add new universal intrinsic v_pack_b
  - add accumulate version of universal intrinsic v_round
  - fix sse/avx2:uint8 multiplication overflow
  - reimplement arithmetic, logic and comparison operations into wide universal intrinsics
    with full support for all types
  - reimplement IPP arithmetic, logic and comparison operations in a separate file arithm_ipp.hpp
  - avoid scalar multiplication if the scaling factor equals 1 and use integer multiplication
  - move C arithmetic operations to precomp.hpp and delete [arithm_simd|arithm_core].hpp
  - add compatibility with new opencv4 divide policy
2018-10-30 12:48:31 +02:00
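A hedged example of one of the universal intrinsics named in the commit above (v_absdiffs, saturating absolute difference of signed elements), written against the 128-bit wide types and assuming CV_SIMD128 is available on the target:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>

#if CV_SIMD128
void absdiff_block(const short* a, const short* b, short* dst)
{
    cv::v_int16x8 va = cv::v_load(a);
    cv::v_int16x8 vb = cv::v_load(b);
    cv::v_store(dst, cv::v_absdiffs(va, vb));  // |a - b| with saturation
}
#endif
```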
Alexander Alekhin
1ed9ff17e1 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-10-12 10:05:55 +00:00
Alexander Alekhin
70f2ee917e cmake: add DllMain() into each OpenCV DLL
to detect process termination after an ExitProcess() call
2018-10-10 11:00:59 +00:00
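A hedged sketch of the Win32 pattern the commit above relies on (not the exact OpenCV code): in DLL_PROCESS_DETACH, a non-NULL lpReserved means the process is terminating, e.g. via ExitProcess(), so heavy cleanup should be skipped:

```cpp
#include <windows.h>

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpReserved)
{
    if (fdwReason == DLL_PROCESS_DETACH)
    {
        if (lpReserved != NULL)
        {
            // Process termination: other threads are already gone; do not
            // touch TLS or wait on synchronization objects here.
        }
        else
        {
            // FreeLibrary() path: normal cleanup is safe.
        }
    }
    return TRUE;
}
```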
Hamdi Sahloul
ef5579dc86 Merge pull request #12310 from cv3d:chunks/enum_interface
* Cleanup macros and enable expansion of `__VA_ARGS__` for Visual Studio

* Macros for enum-arguments backwards compatibility

* Convert struct Param to enum struct

* Enabled ParamType.type for enum types

* Enabled `cv.read` and `cv.write` for enum types

* Rename unnamed enum to AAKAZE.DescriptorType

* Rename unnamed enum to AccessFlag

* Rename unnamed enum to AgastFeatureDetector.DetectorType

* Convert struct DrawMatchesFlags to enum struct

* Rename unnamed enum to FastFeatureDetector.DetectorType

* Rename unnamed enum to Formatter.FormatType

* Rename unnamed enum to HOGDescriptor.HistogramNormType

* Rename unnamed enum to DescriptorMatcher.MatcherType

* Rename unnamed enum to KAZE.DiffusivityType

* Rename unnamed enum to ORB.ScoreType

* Rename unnamed enum to UMatData.MemoryFlag

* Rename unnamed enum to _InputArray.KindFlag

* Rename unnamed enum to _OutputArray.DepthMask

* Convert normType enums to static const NormTypes

* Avoid conflicts with ElemType

* Rename unnamed enum to DescriptorStorageFormat
2018-09-21 18:12:35 +03:00
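A hedged example of the strongly-typed enums introduced by the PR above: the FAST detector type is now passed as FastFeatureDetector::DetectorType rather than a plain int (assumes the features2d module is available):

```cpp
#include <opencv2/features2d.hpp>

cv::Ptr<cv::FastFeatureDetector> makeFast()
{
    return cv::FastFeatureDetector::create(
        10,                                   // threshold
        true,                                 // non-maximum suppression
        cv::FastFeatureDetector::TYPE_9_16);  // DetectorType enum value
}
```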
Alexander Alekhin
dca657a2fd Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-09-10 00:10:21 +03:00
cyy
286c2c236b Merge pull request #12458 from DEEPIR:3.4
* maybe a typo fix

* remove identical branch, maybe a paste error

* add parentheses around macro parameter

* simplify if condition

* check malloc fail

* change the condition of branch removed by commit 3041502861
2018-09-07 18:43:47 +03:00
Alexander Alekhin
db88cd1b25 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-05-21 16:20:14 +03:00
Vadim Pisarevsky
e0dbe5cfcc
handle huge matrices correctly (#11505)
* make sure that the matrix with more than INT_MAX elements is marked as non-continuous, and thus all the pixel-wise functions process it correctly (i.e. row-by-row, not as a single row, where integer overflow may occur when computing the total number of elements)
2018-05-14 15:29:14 +03:00
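A small sketch of the overflow the commit above guards against: for a matrix with more than INT_MAX elements, the total element count must be accumulated in size_t rather than int, so such a Mat cannot be processed as a single continuous row (the dimensions below are illustrative):

```cpp
#include <climits>
#include <cstddef>
#include <cstdio>

int main()
{
    int rows = 50000, cols = 50000;            // 2.5e9 elements in total
    size_t total = (size_t)rows * cols;        // correct: 2500000000
    bool fitsInInt = total <= (size_t)INT_MAX; // false -> must stay non-continuous,
                                               // process row-by-row instead
    std::printf("total=%zu fitsInInt=%d\n", total, (int)fitsInInt);
    return 0;
}
```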
Alexander Alekhin
5b17a60dde next: drop HAVE_TEGRA_OPTIMIZATION/TADP 2018-04-10 18:09:54 +03:00
Maksim Shabunin
221342fb25 Split convert.cpp into smaller pieces 2018-02-12 15:17:19 +03:00
Maksim Shabunin
904640c9ae Split matrix.cpp into smaller pieces 2018-02-05 19:16:33 +03:00
Sayed Adel
d077778074 Added support for VSX 2017-10-09 00:32:29 +00:00
pengli
e340ff9c3a Merge pull request #9114 from pengli:dnn_rebase
add libdnn acceleration to dnn module  (#9114)

* import libdnn code

Signed-off-by: Li Peng <peng.li@intel.com>

* add convolution layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add pooling layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add softmax layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add lrn layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add innerproduct layer ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* add HAVE_OPENCL macro

Signed-off-by: Li Peng <peng.li@intel.com>

* fix for convolution ocl

Signed-off-by: Li Peng <peng.li@intel.com>

* enable getUMat() for multi-dimension Mat

Signed-off-by: Li Peng <peng.li@intel.com>

* use getUMat for ocl acceleration

Signed-off-by: Li Peng <peng.li@intel.com>

* use CV_OCL_RUN macro

Signed-off-by: Li Peng <peng.li@intel.com>

* set OPENCL target when it is available

and disable fuseLayer for OCL target for the time being

Signed-off-by: Li Peng <peng.li@intel.com>

* fix innerproduct accuracy test

Signed-off-by: Li Peng <peng.li@intel.com>

* remove trailing space

Signed-off-by: Li Peng <peng.li@intel.com>

* Fixed tensorflow demo bug.

Root cause is that TensorFlow uses a different algorithm than libdnn
to calculate the convolution output dimension.

libdnn no longer calculates the output dimension and just uses the one
passed in by the config.

* split gemm ocl file

split it into gemm_buffer.cl and gemm_image.cl

Signed-off-by: Li Peng <peng.li@intel.com>

* Fix compile failure

Signed-off-by: Li Peng <peng.li@intel.com>

* check env flag for auto tuning

Signed-off-by: Li Peng <peng.li@intel.com>

* switch to new ocl kernels for softmax layer

Signed-off-by: Li Peng <peng.li@intel.com>

* update softmax layer

on some platforms the subgroup extension may not work well,
fall back to non-subgroup ocl acceleration.

Signed-off-by: Li Peng <peng.li@intel.com>

* fallback to cpu path for fc layer with multi output

Signed-off-by: Li Peng <peng.li@intel.com>

* update output message

Signed-off-by: Li Peng <peng.li@intel.com>

* update fully connected layer

fall back to the gemm API if libdnn returns false

Signed-off-by: Li Peng <peng.li@intel.com>

* Add ReLU OCL implementation

* disable layer fusion for now

Signed-off-by: Li Peng <peng.li@intel.com>

* Add OCL implementation for concat layer

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>

* libdnn: update license and copyrights

Also refine libdnn coding style

Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Li Peng <peng.li@intel.com>

* DNN: Don't link OpenCL library explicitly

* DNN: Make default preferableTarget to DNN_TARGET_CPU

Users should set it to DNN_TARGET_OPENCL explicitly if they want to
use OpenCL acceleration.

Also don't fuse layers when using DNN_TARGET_OPENCL

* DNN: refine coding style

* Add getOpenCLErrorString

* DNN: Use int32_t/uint32_t instead of alias

* Use namespace ocl4dnn to include libdnn things

* remove extra copyTo in softmax ocl path

Signed-off-by: Li Peng <peng.li@intel.com>

* update ReLU layer ocl path

Signed-off-by: Li Peng <peng.li@intel.com>

* Add prefer target property for layer class

It is used to indicate the target for layer forwarding,
either the default CPU target or OCL target.

Signed-off-by: Li Peng <peng.li@intel.com>

* Add cl_event based timer for cv::ocl

* Rename libdnn to ocl4dnn

Signed-off-by: Li Peng <peng.li@intel.com>
Signed-off-by: wzw <zhiwen.wu@intel.com>

* use UMat for ocl4dnn internal buffer

Remove allocateMemory, which used clCreateBuffer directly

Signed-off-by: Li Peng <peng.li@intel.com>
Signed-off-by: wzw <zhiwen.wu@intel.com>

* enable buffer gemm in ocl4dnn innerproduct

Signed-off-by: Li Peng <peng.li@intel.com>

* replace int_tp globally for ocl4dnn kernels.

Signed-off-by: wzw <zhiwen.wu@intel.com>
Signed-off-by: Li Peng <peng.li@intel.com>

* create UMat for layer params

Signed-off-by: Li Peng <peng.li@intel.com>

* update sign ocl kernel

Signed-off-by: Li Peng <peng.li@intel.com>

* update image based gemm of inner product layer

Signed-off-by: Li Peng <peng.li@intel.com>

* remove buffer gemm of inner product layer

call cv::gemm API instead

Signed-off-by: Li Peng <peng.li@intel.com>

* change ocl4dnn forward parameter to UMat

Signed-off-by: Li Peng <peng.li@intel.com>

* Refine auto-tuning mechanism.

- Use OPENCV_OCL4DNN_KERNEL_CONFIG_PATH to set cache directory
  for fine-tuned kernel configuration.
  e.g. export OPENCV_OCL4DNN_KERNEL_CONFIG_PATH=/home/tmp,
  the cache directory will be /home/tmp/spatialkernels/ on Linux.

- Define the environment variable OPENCV_OCL4DNN_ENABLE_AUTO_TUNING to enable
  auto-tuning.

- OPENCV_OPENCL_ENABLE_PROFILING is only used to enable profiling
  for the OpenCL command queue. This fixes basic kernels reporting a wrong
  running time, i.e. 0ms.

- If creating the cache directory fails, disable auto-tuning.

* Detect and create cache dir on windows

Signed-off-by: Li Peng <peng.li@intel.com>

* Refine gemm like convolution kernel.

Signed-off-by: Li Peng <peng.li@intel.com>

* Fix redundant swizzleWeights calling when use cached kernel config.

* Fix "out of resource" bug when auto-tuning too many kernels.

* replace cl_mem with UMat in ocl4dnnConvSpatial class

* OCL4DNN: reduce the tuning kernel candidate.

This patch reduces the number of tuning candidates by about 75% with less
than 2% performance impact on the final result.

Signed-off-by: Zhigang Gong <zhigang.gong@intel.com>

* replace cl_mem with umat in ocl4dnn convolution

Signed-off-by: Li Peng <peng.li@intel.com>

* remove weight_image_ of ocl4dnn inner product

Actually it is unused in the computation

Signed-off-by: Li Peng <peng.li@intel.com>

* Various fixes for ocl4dnn

1. OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel())
2. Ptr<OCL4DNNInnerProduct<float> > innerProductOp
3. Code comments cleanup
4. ignore check on OCL cpu device

Signed-off-by: Li Peng <peng.li@intel.com>

* add build option for log softmax

Signed-off-by: Li Peng <peng.li@intel.com>

* remove unused ocl kernels in ocl4dnn

Signed-off-by: Li Peng <peng.li@intel.com>

* replace ocl4dnnSet with opencv setTo

Signed-off-by: Li Peng <peng.li@intel.com>

* replace ALIGN with cv::alignSize

Signed-off-by: Li Peng <peng.li@intel.com>

* check kernel build options

Signed-off-by: Li Peng <peng.li@intel.com>

* Handle program compilation fail properly.

* Use std::numeric_limits<float>::infinity() for large float number

* check ocl4dnn kernel compilation result

Signed-off-by: Li Peng <peng.li@intel.com>

* remove unused ctx_id

Signed-off-by: Li Peng <peng.li@intel.com>

* change clEnqueueNDRangeKernel to kernel.run()

Signed-off-by: Li Peng <peng.li@intel.com>

* change cl_mem to UMat in image based gemm

Signed-off-by: Li Peng <peng.li@intel.com>

* check intel subgroup support for lrn and pooling layer

Signed-off-by: Li Peng <peng.li@intel.com>

* Fix convolution bug if group is greater than 1

Signed-off-by: Li Peng <peng.li@intel.com>

* Set default layer preferableTarget to be DNN_TARGET_CPU

Signed-off-by: Li Peng <peng.li@intel.com>

* Add ocl perf test for convolution

Signed-off-by: Li Peng <peng.li@intel.com>

* Add more ocl accuracy test

Signed-off-by: Li Peng <peng.li@intel.com>

* replace cl_image with ocl::Image2D

Signed-off-by: Li Peng <peng.li@intel.com>

* Fix build failure in elementwise layer

Signed-off-by: Li Peng <peng.li@intel.com>

* use getUMat() to get blob data

Signed-off-by: Li Peng <peng.li@intel.com>

* replace cl_mem handle with ocl::KernelArg

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(build): don't use C++11, OPENCL_LIBRARIES fix

* dnn(ocl4dnn): remove unused OpenCL kernels

* dnn(ocl4dnn): extract OpenCL code into .cl files

* dnn(ocl4dnn): refine auto-tuning

Auto-tuning is disabled by default; set the OPENCV_OCL4DNN_ENABLE_AUTO_TUNING
environment variable to enable it.

Use a set of pre-tuned configs as the default config if auto-tuning is disabled.
These configs are tuned for Intel GPUs with 48/72 EUs, and for GoogLeNet,
AlexNet and ResNet-50.

If the default config is not suitable, use the first available kernel config
from the candidates. Candidate priority from high to low is: gemm-like kernel,
IDLF kernel, basic kernel.

* dnn(ocl4dnn): pooling doesn't use OpenCL subgroups

* dnn(ocl4dnn): fix perf test

OpenCV has a default 3-second time limit for each performance test.
Warm up the OpenCL backend outside of the perf measurement loop.

* use ocl::KernelArg as much as possible

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): fix bias bug for gemm like kernel

* dnn(ocl4dnn): wrap cl_mem into UMat

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): Refine signature of kernel config

- Use a more readable string as the signature of the kernel config
- Don't count device name and vendor in signature string
- Default kernel configurations are tuned for Intel GPUs with
  24/48/72 EUs, and for the GoogLeNet, AlexNet and ResNet-50 net models.

* dnn(ocl4dnn): swap width/height in configuration

* dnn(ocl4dnn): enable configs for Intel OpenCL runtime only

* core: make configuration helper functions accessible from non-core modules

* dnn(ocl4dnn): update kernel auto-tuning behavior

Avoid unwanted creation of directories

* dnn(ocl4dnn): simplify kernel to workaround OpenCL compiler crash

* dnn(ocl4dnn): remove redundant code

* dnn(ocl4dnn): Add a clearer message for SIMD size mismatch.

* dnn(ocl4dnn): add const to const argument

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): force compiler use a specific SIMD size for IDLF kernel

* dnn(ocl4dnn): drop unused tuneLocalSize()

* dnn(ocl4dnn): specify OpenCL queue for Timer and convolve() method

* dnn(ocl4dnn): sanitize file names used for cache

* dnn(perf): enable Network tests with OpenCL

* dnn(ocl4dnn/conv): drop computeGlobalSize()

* dnn(ocl4dnn/conv): drop unused fields

* dnn(ocl4dnn/conv): simplify ctor

* dnn(ocl4dnn/conv): refactor kernelConfig localSize=NULL

* dnn(ocl4dnn/conv): drop unsupported double / untested half types

* dnn(ocl4dnn/conv): drop unused variable

* dnn(ocl4dnn/conv): alignSize/divUp

* dnn(ocl4dnn/conv): use enum values

* dnn(ocl4dnn): drop unused innerproduct variable

Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): add a generic function to check cl option support

* dnn(ocl4dnn): run softmax subgroup version kernel first

Signed-off-by: Li Peng <peng.li@intel.com>
2017-10-02 15:38:00 +03:00
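A hedged usage sketch for the OpenCL DNN path added in the merge above: the default target stays DNN_TARGET_CPU, so OpenCL acceleration has to be requested explicitly. The model file names are placeholders:

```cpp
#include <opencv2/dnn.hpp>

cv::dnn::Net loadWithOpenCL()
{
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("model.prototxt", "model.caffemodel");
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);  // opt in to the ocl4dnn kernels
    return net;
}
```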
Pavel Vlasov
a57718e1ac ICV2017u3 package update;
- Optimization set change. IPP integrations now provide code for SSE42, AVX2 and AVX512 (SKX) CPUs only. For HW below SSE42, IPP code is disabled.
- Performance regression fixes for IPP code paths;
- cv::boxFilter integration improvement;
- cv::filter2D integration improvement;
2017-08-23 14:24:43 +03:00
Alexander Alekhin
b46e741c95 core(alloc): drop unused code, use memalign() functions instead of hacks
valgrind provides better detection without memory buffer hacks
2017-07-27 18:10:41 +03:00
Alexander Alekhin
602f047fe8 build: replace WIN32 => _WIN32 2017-07-25 13:30:48 +03:00
Alexander Alekhin
006966e629 trace: initial support for code trace 2017-06-26 17:07:13 +03:00
Alexander Alekhin
0213b508dc Merge pull request #8868 from alalek:fix_build_softfloat 2017-06-08 20:22:53 +00:00
Maksim Shabunin
f71ea4dfe9 Merge pull request #8816 from mshabunin:sprintf-fix
Fixed snprintf for VS 2013 (#8816)

* Fixed snprintf for VS 2013

* snprintf: removed declaration from header, changed implementation

* cv_snprintf corrected according to comments

* update snprintf patch
2017-06-08 21:53:16 +02:00
Alexander Alekhin
71517a910a build: fix errors for MSVS2010-2013, reduce default softfloat scope 2017-06-08 01:09:21 +00:00
Alexander Alekhin
125abe2fe4 Merge pull request #8838 from tomoaki0705:dispatchFp16 2017-06-06 15:31:42 +00:00
Tomoaki Teshima
e269ef96cb update convertFp16 using CV_CPU_CALL_FP16
* avoid link error (move the implementation of software version to header)
 * make getConvertFuncFp16 local (move from precomp.hpp to convert.hpp)
 * fix error on 32bit x86
2017-06-06 22:26:51 +09:00
Rostislav Vasilikhin
c6a3a18894 SoftFloat integrated (#8668)
* everything is put into softfloat.cpp and softfloat.hpp

* WIP: try to integrate softfloat into OpenCV

* extra functions removed

* softfloat made stateless

* CV_EXPORTS added

* operators fixed

* exp added, log: WIP

* log32 fixed

* shorter names; a lot of TODOs

* log64 rewritten

* cbrt32 added

* minors, refactoring

* "inline" -> "CV_INLINE"

* cast to bool warnings fixed

* several warnings fixed

* fixed warning about unsigned unary minus

* fixed warnings on type cast

* inline -> CV_INLINE

* special cases processing added (NaNs, Infs, etc.)

* constants for NaN and Inf added

* more macros and helper functions added

* added (or fixed) tests for pow32, pow64, cbrt32

* exp-like functions fixed

* minor changes

* fixed random number generation for tests

* tests for exp32 and exp64: values are compared to SoftFloat-based naive implementation

* minor warning fix

* pow(f, i) 32/64: special cases handling added

* unused functions removed

* refactoring is in progress (not compiling)

* CV_inline added

* unions {uint_t, float_t} removed

* tests compilation fixed

* static const members -> static methods returning const

* reinterpret_cast

* warning fixed

* const-ness fixed

* all FP calculations (even compile-time) are done in SoftFloat + minor fixes

* pow(f, i) removed from interface (can cause incorrect cast) to internals of pow(f, f), tests fixed

* CV_INLINE -> inline

* internal constants moved to .cpp file

* toInt_minMag() methods merged into toInt() methods

* macros moved to .cpp file

* refactoring: types renamed to softfloat and softdouble; explicit constructors, etc.

* toFloat(), toDouble() -> operator float(), operator double()

* removed f32/f64 prefixes from functions names

* toType() methods removed, round() and trunc() functions added

* minor change

* minors

* MSVC: warnings fixed

* added int cvRound(), cvFloor, cvCeil, cvTrunc, saturate_cast<T>()

* typo fixed

* type cast fixed
2017-05-29 17:07:25 +03:00
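A hedged example of the softfloat API as described in the commit notes above: explicit constructors, arithmetic operators, static constant methods, math functions, and conversion back via operator float():

```cpp
#include <opencv2/core/softfloat.hpp>

float demo()
{
    cv::softfloat a(1.5f);
    cv::softfloat b(2.0f);
    cv::softfloat c = a * b + cv::softfloat::one();  // deterministic soft FP math
    cv::softfloat e = cv::exp(c);
    return (float)e;                                  // operator float()
}
```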
KUANG, Fangjun
be7d4608fb Add more comments to the members of CoreTLSData related to OpenCL. 2017-03-12 16:13:00 +01:00
Tomoaki Teshima
d0bdf99501 check correct flag 2017-01-27 18:42:58 +09:00
apavlenko
1e2ddc30b1 Canny via OpenVX, Node wrapper extended (query/set attribute), some naming fixes 2016-12-09 14:53:06 +03:00
Tomoaki Teshima
87d0c91dcf fix warning of build 2016-06-09 18:24:00 +09:00
Tomoaki Teshima
b2ad7cd9c0 add feature to convert FP32(float) to FP16(half)
* check compiler support
  * check HW support before executing
  * add test doing round trip conversion from / to FP32
  * treat array correctly if size is not multiple of 4
  * add declaration to prevent warning
  * make it possible to enable fp16 on 32bit ARM
  * make the conversion possible on non-supported HW, too.
  * add test using both HW and SW implementation
2016-05-21 21:31:33 +09:00
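A hedged sketch of the round-trip conversion this feature enables via cv::convertFp16: CV_32F to FP16 and back, with the half-precision data held in a 16-bit Mat:

```cpp
#include <opencv2/core.hpp>

void roundTrip(const cv::Mat& src32f)
{
    cv::Mat half, back;
    cv::convertFp16(src32f, half);   // FP32 -> FP16
    cv::convertFp16(half, back);     // FP16 -> FP32
    // 'back' should match src32f up to half-precision rounding.
}
```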
Maksim Shabunin
84f37d352f HAL moved back to core 2015-12-17 12:33:23 +03:00
Maksim Shabunin
b4bcdd10a1 HAL: improvements
- added new functions from core module: split, merge, add, sub, mul, div, ...
- added function replacement mechanism
- added example of HAL replacement library
2015-12-03 14:43:37 +03:00
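A hedged sketch of the function replacement mechanism mentioned in the commit above: a custom HAL header overrides a hook such as cv_hal_add8u with its own implementation. The parameter list is an assumption modeled on hal_replacement.hpp, not a verbatim signature:

```cpp
#include <algorithm>
#include <cstddef>

// Assumed hook signature: row pointers with per-matrix steps plus width/height.
int my_hal_add8u(const unsigned char* src1, size_t step1,
                 const unsigned char* src2, size_t step2,
                 unsigned char* dst, size_t step,
                 int width, int height)
{
    for (int y = 0; y < height; y++)
    {
        const unsigned char* a = src1 + y * step1;
        const unsigned char* b = src2 + y * step2;
        unsigned char* d = dst + y * step;
        for (int x = 0; x < width; x++)
            d[x] = (unsigned char)std::min(a[x] + b[x], 255);  // saturating add
    }
    return 0;  // success (CV_HAL_ERROR_OK)
}

// In the custom HAL header the hook would then be redirected:
// #undef  cv_hal_add8u
// #define cv_hal_add8u my_hal_add8u
```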
Vadim Pisarevsky
3942b1f362 Merge pull request #5340 from alalek:ocl_off 2015-11-10 16:53:36 +00:00