ISA 2.07 (aka POWER8) effectively extended the expanding multiply
operation to word types, but the altivec intrinsics prior to GCC 8 did
not get the update.
Work around this deficiency in the same way as other similar fixes.
This was exposed by commit 33fb253a66,
which leverages the int -> dword expanding multiply.
This fixes Issue #15506
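A minimal sketch of this kind of workaround, assuming GCC inline-asm access to the ISA 2.07 vmuleuw/vmulouw instructions; the wrapper names below are illustrative, not OpenCV's actual identifiers:
```
// Hedged sketch: emulate the 32x32 -> 64-bit expanding multiply on POWER8
// when the corresponding altivec intrinsic is unavailable (GCC < 8).
#if defined(__VSX__) && defined(__GNUC__) && (__GNUC__ < 8)
typedef __vector unsigned int       vec_uint4;
typedef __vector unsigned long long vec_udword2;

// multiply even-numbered word lanes into doublewords
static inline vec_udword2 mul_even_u32(const vec_uint4& a, const vec_uint4& b)
{
    vec_udword2 ret;
    __asm__ ("vmuleuw %0,%1,%2" : "=v" (ret) : "v" (a), "v" (b));
    return ret;
}

// multiply odd-numbered word lanes into doublewords
static inline vec_udword2 mul_odd_u32(const vec_uint4& a, const vec_uint4& b)
{
    vec_udword2 ret;
    __asm__ ("vmulouw %0,%1,%2" : "=v" (ret) : "v" (a), "v" (b));
    return ret;
}
#endif
```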
* Adding all possible data type combinations to the perf tests, since some
use SIMD acceleration and others do not (a sketch of such a test follows this list).
* Disabling full tests by default.
* Giving proper names and removing magic numbers and sanity checks from the new
performance tests for the integral function.
* Giving proper names and making the array static.
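A hedged sketch of what such a parameterized perf test can look like; the fixture, test name, parameter grid, and use of gtest's DISABLED_ prefix are illustrative, not the exact tests added here:
```
#include "perf_precomp.hpp"   // assumes this lives inside a module's perf suite

namespace opencv_test {

// Depth combinations matter because only some of them hit the SIMD paths.
typedef perf::TestBaseWithParam<std::tuple<cv::Size, perf::MatType, perf::MatDepth>>
        Size_MatType_OutMatDepth;

PERF_TEST_P(Size_MatType_OutMatDepth, DISABLED_integral_full,
            testing::Combine(testing::Values(::perf::sz1080p),
                             testing::Values(CV_8UC1, CV_32FC1),
                             testing::Values(CV_32S, CV_32F, CV_64F)))
{
    const cv::Size sz = std::get<0>(GetParam());
    const int matType = std::get<1>(GetParam());
    const int sdepth  = std::get<2>(GetParam());
    // NOTE: unsupported (src, sdepth) pairs would need to be skipped in a real test.

    cv::Mat src(sz, matType), sum;
    declare.in(src, WARMUP_RNG);

    TEST_CYCLE() cv::integral(src, sum, sdepth);

    SANITY_CHECK_NOTHING();
}

} // namespace opencv_test
```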
* - headers in the "infer/" and "infer/ie/" folders are included into gapi_ext_hdrs;
    because of that, a few #includes are required in the headers
  - the HAVE_INF_ENGINE flag check in the headers "infer/ie.hpp" and "infer/ie/util.hpp" is deleted
* - the "ie/util.hpp" header is a private header now as it's used for tests; it's been moved to the src directory, next to the implementation file "ie/giebackend.cpp"
  - the path to this header in the files "ie/giebackend.cpp" and "test/infer/gapi_infer_ie_test.cpp" is updated
  - as it's a private header now and explicitly depends on IE, the "HAVE_INF_ENGINE" flag check is restored
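A rough illustration of the resulting guard layout, with placeholder content; only the HAVE_INF_ENGINE pattern is the point here:
```
// A private header next to giebackend.cpp (illustrative content): it is not
// installed with the public G-API headers and explicitly depends on IE,
// so the HAVE_INF_ENGINE check is back in place.
#ifndef OPENCV_GAPI_INFER_IE_UTIL_HPP
#define OPENCV_GAPI_INFER_IE_UTIL_HPP

#ifdef HAVE_INF_ENGINE
#include <inference_engine.hpp>
// ... helpers shared by ie/giebackend.cpp and the IE infer tests go here
#endif // HAVE_INF_ENGINE

#endif // OPENCV_GAPI_INFER_IE_UTIL_HPP
```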
* Support GArray as input in fluid kernels (a minimal kernel sketch follows this list)
* Create tests for GArray input in fluid
* Some fixes to fully support GArray
* Refactor code and change the kernel according to review
* Add histogram calculation as a G-API kernel
Add assert that input GArgs in fluid contain at least one GMat
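A minimal sketch of a Fluid kernel taking a GArray input, assuming the standard G-API kernel macros; the operation and kernel names are made up for illustration:
```
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/fluid/gfluidkernel.hpp>

// Operation: add a per-image offset (passed as a GArray<int>) to a GMat.
G_TYPED_KERNEL(GAddOffset, <cv::GMat(cv::GMat, cv::GArray<int>)>, "sample.fluid.add_offset")
{
    static cv::GMatDesc outMeta(cv::GMatDesc in, cv::GArrayDesc) { return in; }
};

// Fluid implementation: the GArray argument arrives in run() as a std::vector.
GAPI_FLUID_KERNEL(GFluidAddOffset, GAddOffset, false)
{
    static const int Window = 1;

    static void run(const cv::gapi::fluid::View& in,
                    const std::vector<int>&      offset,
                    cv::gapi::fluid::Buffer&     out)
    {
        const uchar* src = in.InLine<uchar>(0);
        uchar*       dst = out.OutLine<uchar>();
        const int    off = offset.empty() ? 0 : offset[0];
        for (int x = 0; x < out.length(); ++x)
            dst[x] = cv::saturate_cast<uchar>(src[x] + off);
    }
};
```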
* Convert ImgWarp from SSE SIMD to HAL - 2.8x faster on Power (VSX) and 15% speedup on x86
* Change the compile flag from CV_SIMD128 to CV_SIMD128_64F to guard use of the v_float64x2 type
* Change WarpPerspectiveLine from class member functions with dispatching to static functions.
* Re-add dynamic runtime dispatch of the execution.
* Restore SSE4_1 optimizations inside the opt_SSE4_1 namespace
* Convert lkpyramid from SSE SIMD to HAL - 90% faster on Power (VSX). (A generic SSE-to-HAL sketch follows this list.)
* Replace stores with reduce_sum. Rework to handle endianness correctly.
* Fix compiler warnings by casting values explicitly to short.
* Switch to the CV_SIMD128 compiler definition. Unroll the loop to 8 elements since we've already loaded the data.
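As a rough illustration of what these SSE-to-HAL ports look like (not the actual ImgWarp/lkpyramid code), here is a generic loop written with OpenCV's universal intrinsics, including the reduce_sum idiom mentioned above:
```
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>

// The same source compiles to SSE, NEON or VSX when CV_SIMD128 is available.
static float dot_product(const float* a, const float* b, int n)
{
    float sum = 0.f;
    int i = 0;
#if CV_SIMD128
    cv::v_float32x4 vsum = cv::v_setzero_f32();
    for (; i <= n - 4; i += 4)
        vsum = cv::v_fma(cv::v_load(a + i), cv::v_load(b + i), vsum);
    sum = cv::v_reduce_sum(vsum);   // horizontal sum instead of four scalar stores
#endif
    for (; i < n; ++i)              // scalar tail
        sum += a[i] * b[i];
    return sum;
}
```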
Detected by clang trunk:
```
opencv/modules/core/src/ocl.cpp:4337:37: warning: object backing the pointer will be destroyed at the end of the full-expression [-Wdangling]
CV_OCL_CHECK_RESULT(retval, cv::format("clCreateBuffer(capacity=%lld) => %p", (long long int)entry.capacity_, (void*)entry.clBuffer_).c_str());
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
opencv/modules/core/src/ocl.cpp:193:42: note: expanded from macro 'CV_OCL_CHECK_RESULT'
if (0) { const char* msg_ = (msg); CV_UNUSED(msg_); /* ensure const char* type (cv::String without c_str()) */ } \
```
This happens because `cv::format` yields a temporary std::string, and thus `msg_` points to a destroyed buffer.
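A small self-contained illustration of the problem (the formatting helper below is a stand-in for cv::format, not OpenCV code):
```
#include <cstdio>
#include <string>

// Stand-in for cv::format(...): returns a std::string by value.
static std::string format_msg(long long capacity)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "clCreateBuffer(capacity=%lld)", capacity);
    return std::string(buf);
}

int main()
{
    // BAD: the temporary std::string dies at the end of the full-expression,
    // so `msg` points into a destroyed buffer from then on.
    const char* msg = format_msg(42).c_str();
    (void)msg;

    // GOOD: keep the std::string alive for as long as the pointer is used.
    const std::string s = format_msg(42);
    std::printf("%s\n", s.c_str());
    return 0;
}
```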
* Fix the detection of XIMEA on Windows (when it has been installed by another user with administrative privileges, for example).
* Change the flow: first try the HKEY_CURRENT_USER key and, if it is empty, fall back to HKEY_LOCAL_MACHINE
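A hedged sketch of that lookup order; the registry subkey and value names below are placeholders, not the actual keys used by the XIMEA package:
```
#include <windows.h>
#include <string>

// Read a REG_SZ value from a given root, returning "" if it is missing.
static std::string queryRegString(HKEY root, const char* subkey, const char* value)
{
    HKEY hKey = nullptr;
    if (RegOpenKeyExA(root, subkey, 0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return std::string();
    char buf[MAX_PATH] = {0};
    DWORD size = sizeof(buf) - 1, type = 0;
    LONG rc = RegQueryValueExA(hKey, value, nullptr, &type, (LPBYTE)buf, &size);
    RegCloseKey(hKey);
    return (rc == ERROR_SUCCESS && type == REG_SZ) ? std::string(buf) : std::string();
}

// Try the per-user hive first, then fall back to the machine-wide one.
static std::string findXimeaPath()
{
    const char* subkey = "SOFTWARE\\XIMEA\\API";   // placeholder subkey
    std::string path = queryRegString(HKEY_CURRENT_USER, subkey, "Path");
    if (path.empty())
        path = queryRegString(HKEY_LOCAL_MACHINE, subkey, "Path");
    return path;
}
```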
Use 4x FMA chains to sum on SIMD 128 FP64 targets. On
x86 this showed about a 1.4x improvement.
For PPC, do a full multiply (32x32->64b), convert to DP,
then accumulate. This may be slightly less precise for
some inputs, but it is about 1.5x faster than the 4x FMA approach above,
which is itself roughly 1.5x faster than the original code, for a combined speedup of about 2.5x.
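A rough sketch of the 4x-FMA-chain idea using OpenCV's universal intrinsics on a 128-bit FP64 target; the function is illustrative, not the actual accumulation kernel:
```
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>

#if CV_SIMD128_64F
// Four independent accumulators hide FMA latency; they are combined only once
// at the end, followed by a scalar tail for the remaining elements.
static double sum_of_products(const double* a, const double* b, int n)
{
    cv::v_float64x2 s0 = cv::v_setzero_f64(), s1 = s0, s2 = s0, s3 = s0;
    int i = 0;
    for (; i <= n - 8; i += 8)
    {
        s0 = cv::v_fma(cv::v_load(a + i    ), cv::v_load(b + i    ), s0);
        s1 = cv::v_fma(cv::v_load(a + i + 2), cv::v_load(b + i + 2), s1);
        s2 = cv::v_fma(cv::v_load(a + i + 4), cv::v_load(b + i + 4), s2);
        s3 = cv::v_fma(cv::v_load(a + i + 6), cv::v_load(b + i + 6), s3);
    }
    double sum = cv::v_reduce_sum(s0) + cv::v_reduce_sum(s1)
               + cv::v_reduce_sum(s2) + cv::v_reduce_sum(s3);
    for (; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}
#endif
```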
* Added the inpaint function in embindgen.py
* Added a test for the inpaint function and fixed the function in build_js
* Fixed the test for the inpaint function
* Removed rotate; fixed build_js.py
G-API: Fix Journal usage in Fluid backend (#15238)
* Fix Journal usage in Fluid backend
* Delete dumpDotRequired(): invalid check
* Update mem consumption test
* Test that the new test works
* Debug the memory consumption function
* Increase iterations in the test
* Rewrite the memory consumption measurement part (one way to measure it is sketched below)
* Restore correct fix for Fluid journals
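For reference, one hedged way a test can sample memory consumption on Linux/macOS (the actual G-API test may measure it differently):
```
#include <sys/resource.h>
#include <cstddef>

// Peak resident set size of the current process: kilobytes on Linux,
// bytes on macOS. Sampling it before and after building/running the graph
// bounds how much memory the compilation consumed.
static size_t peakRSS()
{
    struct rusage ru = {};
    getrusage(RUSAGE_SELF, &ru);
    return static_cast<size_t>(ru.ru_maxrss);
}
```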