* [build][option] Introduce `OPENCV_DISABLE_THREAD_SUPPORT` option.
The option forces the library to build without thread support.
* update handling of OPENCV_DISABLE_THREAD_SUPPORT
- reduce the number of #if conditions
* [to squash] cmake: apply mode vars in toolchains too
Co-authored-by: Alexander Alekhin <alexander.a.alekhin@gmail.com>
There can be an int overflow.
The single-array overload cv::norm( InputArray _src, int normType, InputArray _mask ) is fine,
but the two-array overload cv::norm( InputArray _src1, InputArray _src2, int normType, InputArray _mask ) is not.
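For illustration only (not part of the patch), a minimal sketch of the two-array overload in question; with large 8-bit inputs the L1 sum exceeds INT_MAX, which is exactly where a 32-bit accumulator would overflow:

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Worst case for the two-array overload: maximal per-pixel difference.
    // The L1 sum is 4096 * 4096 * 255 ~= 4.28e9, which no longer fits in a
    // 32-bit int, so an int accumulator would overflow here.
    cv::Mat a(4096, 4096, CV_8UC1, cv::Scalar(0));
    cv::Mat b(4096, 4096, CV_8UC1, cv::Scalar(255));

    double n = cv::norm(a, b, cv::NORM_L1);
    std::cout << n << std::endl;
    return 0;
}
```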
Improve performance on Arm64
* Improve performance on Apple silicon
This patch will
- Enable dot product intrinsics for macOS arm64 builds
- Enable for macOS arm64 builds
- Improve HAL primitives
- reduction (sum, min, max, sad)
- signmask
- mul_expand
- check_any / check_all
Results on an M1 MacBook Pro
* Updates to #20011 based on feedback
- Removes Apple Silicon specific workarounds
- Makes #ifdef sections smaller for v_mul_expand cases
- Moves dot product optimization to compiler optimization check
- Adds 4x4 matrix transpose optimization
* Remove dotprod and fix v_transpose
Based on the latest feedback, we've removed dotprod entirely and will revisit it in a future PR.
Added explicit casts with v_transpose4x4()
This should resolve all open items with this PR
* Remove commented out lines
Remove two extraneous comments
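As a rough sketch of the v_transpose4x4() universal intrinsic touched above (assuming a CV_SIMD128 target; not taken from the PR itself):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>
#include <iostream>

int main()
{
#if CV_SIMD128
    float src[16], dst[16];
    for (int i = 0; i < 16; i++) src[i] = (float)i;        // row-major 4x4 block

    cv::v_float32x4 r0 = cv::v_load(src + 0), r1 = cv::v_load(src + 4);
    cv::v_float32x4 r2 = cv::v_load(src + 8), r3 = cv::v_load(src + 12);

    cv::v_float32x4 c0, c1, c2, c3;
    cv::v_transpose4x4(r0, r1, r2, r3, c0, c1, c2, c3);     // rows become columns

    cv::v_store(dst + 0, c0);  cv::v_store(dst + 4, c1);
    cv::v_store(dst + 8, c2);  cv::v_store(dst + 12, c3);
    std::cout << dst[1] << ' ' << dst[4] << std::endl;      // prints: 4 1
#endif
    return 0;
}
```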
* Adding rbegin() and rend() functions to the matrix class.
This is important for closer compliance with standard C++, given the ever-increasing number of people using standard algorithms for better code readability and maintainability.
The functions are copy-pasted from their counterparts (even though they should probably call the counterparts, but this gave me some trouble).
They return iterators wrapped in std::reverse_iterator.
Follow up of an open feature request:
https://github.com/opencv/opencv/issues/4641
* Fix rbegin() and rend() and provide tests for them
* Removing unnecessary whitespaces
* Adding rbegin and rend to the Mat_ class with the right parameters so we don't need to repeat the template argument.
An instantiated cv::Mat_<int>, for example, can call its rbegin() directly and doesn't need rbegin<int>() with this convenience addition.
Follows what is done for forward iterators
* static_cast the vector size (returned as size_t) to an int (as required by the OpenCV Mat constructor)
Co-authored-by: Stefan <stefan.gerl@tum.de>
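A hedged usage sketch, assuming the Mat_ rbegin()/rend() members introduced here behave like their forward counterparts:

```cpp
#include <opencv2/core.hpp>
#include <numeric>
#include <iostream>

int main()
{
    cv::Mat_<int> m(2, 3);
    std::iota(m.begin(), m.end(), 1);            // fills 1 2 3 / 4 5 6

    // Reverse traversal without repeating the element type,
    // exactly as the forward begin()/end() already allow on Mat_.
    for (auto it = m.rbegin(); it != m.rend(); ++it)
        std::cout << *it << ' ';                 // prints 6 5 4 3 2 1
    std::cout << std::endl;
    return 0;
}
```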
* implements https://github.com/opencv/opencv/issues/19147
* CAUTION: this PR will only function safely in the
4+ branches that already include PR 19029
* CAUTION: this PR requires thread-safe startup of the alloc.cpp
translation unit as implemented in PR 19029
Ordinary quaternion
* version 1.0
* add assumeUnit;
add UnitTest;
check boundary values;
fix functions to use the call style func(obj);
fix 4x4;
add Rodrigues vector transformation;
fix mat to quat;
* fix blank and tab
* fix blank and tab
modify test; cpp to hpp
* mainly improve comments;
add rvec2Quat; fix toRodrigues;
change throw to CV_Error
* fix bug of Quatd * int;
combine hpp and cpp;
fix << overload error on Windows;
modify include in test file;
* move implementation to quaternion.inl.hpp;
change some constructors to createFrom* functions;
change Rodrigues vector to rotation vector;
change the MatExpr return type to a 3x3 Mat;
improve comments;
* try to fix log function error on Windows
* add enums for assumeUnit;
improve docs;
add using std::cos funcs
* remove using std::* from header;
add std::* in affine.hpp, warpers_inl.hpp;
* quat: coding style
* quat: AssumeType => QuatAssumeType
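A small usage sketch of the quaternion API described above; it assumes cv::Quatd::createFromRvec(), toRotMat3x3() and the QUAT_ASSUME_UNIT enum as named in these notes:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/quaternion.hpp>
#include <iostream>

int main()
{
    // Build a unit quaternion from a rotation vector (angle * axis):
    // here a 90-degree rotation around the Z axis.
    cv::Vec3d rvec(0, 0, CV_PI / 2);
    cv::Quatd q = cv::Quatd::createFromRvec(rvec);

    // Convert back to a 3x3 rotation matrix; the assumeUnit enum added in
    // this PR lets the caller promise the quaternion is already normalized.
    cv::Matx33d R = q.toRotMat3x3(cv::QUAT_ASSUME_UNIT);
    std::cout << cv::Mat(R) << std::endl;
    return 0;
}
```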
Added clapack
* bring a small subset of Lapack, automatically converted to C, into OpenCV
* added missing lsame_ prototype
* * small fix in make_clapack script
* trying to fix remaining CI problems
* fixed character arrays' initializers
* get rid of F2C_STR_MAX
* * added back single-precision versions for QR, LU and Cholesky decompositions. It adds very little extra overhead.
* added stub version of sgesdd.
* uncommented calls to all the single-precision Lapack functions from opencv/core/src/hal_internal.cpp.
* fixed warning from Visual Studio + cleaned f2c runtime a bit
* * regenerated Lapack w/o forward declarations of intrinsic functions (such as sqrt(), r_cnjg() etc.)
* at once, trailing whitespaces are removed from the generated sources, just in case
* since there is no declarations of intrinsic functions anymore, we could turn some of them into inline functions
* trying to eliminate the crash on ARM
* fixed API and semantics of s_copy
* * CLapack has been tested successfully. It's now time to restore the standard LAPACK detection procedure
* removed some more trailing whitespaces
* * retained only the essential stuff in CLapack
* added checks to lapack calls to gracefully return "not implemented" instead of returning invalid results with "ok" status
* disabled warning when building lapack
* cmake: update LAPACK detection
Co-authored-by: Alexander Alekhin <alexander.a.alekhin@gmail.com>
- OpenCL kernel cleanup processing is asynchronous and can be called even after forced clFinish()
- buffers are released later in asynchronous mode
- silence these false positive cases for asynchronous cleanup
* add eigen tensor conversion functions
* add eigen tensor conversion tests
* add support for column major order
* update eigen tensor tests
* fix coding style and add conditional compilation
* fix conditional compilation checks
* remove whitespace
* rearrange functions for easier reading
* reformat function documentation and add tensormap unit test
* cleanup documentation of unit test
* remove condition duplication
* check Eigen major version, not minor version
* restrict to Eigen v3.3.0+
* add documentation note and add type checking to cv2eigen_tensormap()
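A minimal sketch of the new conversions, assuming Eigen 3.3+ with the unsupported Tensor module available and the cv2eigen/eigen2cv/cv2eigen_tensormap overloads named above:

```cpp
#include <Eigen/Core>
#include <unsupported/Eigen/CXX11/Tensor>   // Eigen headers must precede eigen.hpp
#include <opencv2/core.hpp>
#include <opencv2/core/eigen.hpp>

int main()
{
    // 3-channel float image converted to an H x W x C tensor and back.
    cv::Mat img(4, 5, CV_32FC3, cv::Scalar(1, 2, 3));

    Eigen::Tensor<float, 3, Eigen::RowMajor> t;
    cv::cv2eigen(img, t);        // copy Mat -> Tensor

    cv::Mat back;
    cv::eigen2cv(t, back);       // copy Tensor -> Mat

    // Zero-copy view over the Mat data (valid while img stays alive).
    auto view = cv::cv2eigen_tensormap<float>(img);
    (void)view;
    return 0;
}
```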
trying to fix handling of file storages with extremely long lines
* trying to fix handling of file storages with extremely long lines: https://github.com/opencv/opencv/issues/11061
* * fixed erroneous pointer access in JSON parser.
* it's now crash-test time! temporarily set the initial parser buffer size to just 40 bytes. let's run all the tests and check if the buffer is always correctly resized and handled
* fixed pointer use in JSON parser; added the proper test to catch this case
* fixed the test to make it more challenging. generate test json with
*
**
***
etc. shape
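For context, a hedged sketch of the scenario being exercised: round-tripping a value far longer than any internal parser buffer through an in-memory JSON FileStorage:

```cpp
#include <opencv2/core.hpp>
#include <cassert>
#include <string>

int main()
{
    // A single JSON line far longer than any internal parser buffer.
    std::string longValue(100000, 'x');

    cv::FileStorage out(".json", cv::FileStorage::WRITE | cv::FileStorage::MEMORY);
    out << "payload" << longValue;
    std::string json = out.releaseAndGetString();

    cv::FileStorage in(json, cv::FileStorage::READ | cv::FileStorage::MEMORY);
    std::string roundTrip = (std::string)in["payload"];
    assert(roundTrip == longValue);
    return 0;
}
```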
Vectorize minMaxIdx functions
* Updated documentation and intrinsic tests for v_reduce
* Add other files back in from the forced push
* Prevent a constant overflow with v_reduce for the int8 type
* Another alternative to fix constant overflow warning.
* Fix another compiler warning.
* Update comments and change comparison form to be consistent with other vectorized loops.
* Change return type of v_reduce_min & max for v_uint8 and v_uint16 to be same as lane type.
* Cast v_reduce functions to int to avoid overflow. Reduce number of parameters in MINMAXIDX_REDUCE macro.
* Restore cast type for v_reduce_min & max to LaneType
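A short sketch of the reduction behavior described above (v_reduce_min/max returning the lane type), assuming a CV_SIMD128 target:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>
#include <iostream>

int main()
{
#if CV_SIMD128
    uchar buf[16] = { 9, 3, 7, 250, 1, 8, 6, 5, 4, 2, 11, 13, 17, 19, 23, 29 };
    cv::v_uint8x16 v = cv::v_load(buf);

    // Per the change above, the horizontal reductions return the lane type
    // (uchar here) rather than a wider integer.
    uchar mn = cv::v_reduce_min(v);   // 1
    uchar mx = cv::v_reduce_max(v);   // 250
    std::cout << int(mn) << ' ' << int(mx) << std::endl;
#endif
    return 0;
}
```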
* add cv::compare test when Mat type == CV_16F
* add assertion in cv::compare when src.depth() == CV_16F
* cv::compare assertion minor fix
* core: add more checks
Add checks for empty operands in Matrix expressions that don't check properly
* Starting to add checks for empty operands in Matrix expressions that
don't check properly.
* Adding checks and declarations for checker functions
* Fix signatures and add checks for each class of Matrix Expr operation
* Make it catch the right exception
* Don't expose helper functions to public API
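A hedged illustration of the intended effect: a matrix expression with an empty operand should now fail with a cv::Exception instead of proceeding (exact message not guaranteed):

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Mat empty;                                   // no data
    cv::Mat ones = cv::Mat::ones(3, 3, CV_8UC1);

    try
    {
        cv::Mat sum = empty + ones;                  // matrix expression with an empty operand
        std::cout << "unexpectedly succeeded" << std::endl;
    }
    catch (const cv::Exception& e)
    {
        std::cout << "rejected as expected: " << e.what() << std::endl;
    }
    return 0;
}
```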
Improving VSX performance of integral function
* Adding support for vector get function on VSX datatypes so the
integral function gains a bit of performance.
* Removing get as a datatype member function and implementing a new HAL
instruction v_extract_n to get the n-th element of a vector register.
* Adding SSE/NEON/AVX intrinsics.
* Implement new HAL instruction v_broadcast_element on VSX/AVX/NEON/SSE.
* core(simd): add tests for v_extract_n/v_broadcast_element
- updated docs
- commented out code to repair compilation
- added WASM and MSA default implementations
* core(simd): fix compilation
- x86: avoid _mm256_extract_epi64/32/16/8 with MSVS 2015
- x86: _mm_extract_epi64 is 64-bit only
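A small usage sketch of the two new intrinsics, v_extract_n and v_broadcast_element, assuming a CV_SIMD128 target (illustrative, not from the PR):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>
#include <iostream>

int main()
{
#if CV_SIMD128
    int buf[4] = { 10, 20, 30, 40 };
    cv::v_int32x4 v = cv::v_load(buf);

    // Scalar extraction of lane 2 (compile-time index).
    int third = cv::v_extract_n<2>(v);                 // 30

    // Vector whose every lane holds the value of lane 2.
    cv::v_int32x4 bcast = cv::v_broadcast_element<2>(v);

    int out[4];
    cv::v_store(out, bcast);                           // 30 30 30 30
    std::cout << third << ' ' << out[0] << std::endl;
#endif
    return 0;
}
```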
* cleanup
- move TLS & instrumentation code out of core/utility.hpp
- (*) TLSData lost its .gather() method (used to dispose thread data on thread termination)
- use TLSDataAccumulator for reliable collecting of thread data
- prefer using .detachData() + .cleanupDetachedData() instead of the .gather() method
(*) API is broken: replace TLSData => TLSDataAccumulator if gathering is required
(objects disposal on threads termination is not available in accumulator mode)
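A rough sketch of the accumulator workflow, assuming the TLSDataAccumulator interface in opencv2/core/utils/tls.hpp (getRef / detachData / cleanupDetachedData) behaves as described above:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/utils/tls.hpp>
#include <iostream>
#include <vector>

struct Counter { int hits = 0; };

// The accumulator keeps one Counter per thread and can hand them back for a final merge.
static cv::TLSDataAccumulator<Counter> g_counters;

int main()
{
    cv::parallel_for_(cv::Range(0, 1000), [](const cv::Range& r)
    {
        g_counters.getRef().hits += r.end - r.start;         // per-thread instance
    });

    std::vector<Counter*>& all = g_counters.detachData();    // collect data from all threads
    int total = 0;
    for (const Counter* c : all)
        total += c->hits;
    g_counters.cleanupDetachedData();                        // dispose the detached objects
    std::cout << total << std::endl;                         // sums to 1000
    return 0;
}
```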
Fixing bug with comparison of v_int64x2 or v_uint64x2
* Casting v_uint64x2 to v_float64x2 and comparing does NOT work in all cases. Rewritten using epi64 instructions; it's faster too.
* Fix bad merge.
* Fix equal comparison for non-SSE4.1. Add test cases for v_int64x2 comparisons.
* Try to fix merge conflict.
* Only test v_int64x2 comparisons if CV_SIMD_64F
* Fix compiler warning.
* New v_reverse HAL intrinsic for reversing the ordering of a vector
* Fix conflict.
* Try to resolve conflict again.
* Try one more time.
* Add _MM_SHUFFLE. Remove non-vectorized code in SSE2. Fix copy-and-paste issue with NEON.
* Change v_uint16x8 SSE2 version to use shuffles
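An illustrative sketch of the new v_reverse intrinsic (assuming a CV_SIMD128 target; not part of the PR):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>
#include <iostream>

int main()
{
#if CV_SIMD128
    short buf[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    cv::v_int16x8 v = cv::v_load(buf);

    cv::v_int16x8 r = cv::v_reverse(v);   // lanes in reverse order: 8 7 6 5 4 3 2 1

    short out[8];
    cv::v_store(out, r);
    std::cout << out[0] << ' ' << out[7] << std::endl;   // prints: 8 1
#endif
    return 0;
}
```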
* core: rework and optimize SIMD implementation of dotProd
- add new universal intrinsics v_dotprod[int32], v_dotprod_expand[u&int8, u&int16, int32], v_cvt_f64(int64)
- add a boolean param to all v_dotprod&_expand intrinsics that changes the order in which adjacent pairs are added
on some platforms, in order to reach maximum optimization when only the sum across all lanes matters
- fix clang build on ppc64le
- support wide universal intrinsics for dotProd_32s
- remove raw SIMD and activate universal intrinsics for dotProd_8
- implement SIMD optimization for dotProd_s16&u16
- extend performance test data types of dotprod
- fix GCC VSX workaround of vec_mule and vec_mulo (in little-endian it must be swapped)
- optimize v_mul_expand(int32) on VSX
* core: remove boolean param from v_dotprod&_expand and implement v_dotprod_fast&v_dotprod_expand_fast
these changes were made based on terfendail's review
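A hedged sketch of the expanding dot-product intrinsics mentioned above; only the sum across lanes of the _fast variant is expected to match the ordered one:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>
#include <iostream>

int main()
{
#if CV_SIMD128
    schar a[16], b[16];
    for (int i = 0; i < 16; i++) { a[i] = (schar)(i + 1); b[i] = 2; }

    cv::v_int8x16 va = cv::v_load(a);
    cv::v_int8x16 vb = cv::v_load(b);

    // 8-bit x 8-bit products widened and accumulated into 32-bit lanes.
    cv::v_int32x4 prod = cv::v_dotprod_expand(va, vb);
    // The "fast" variant may add pairs in a platform-specific order; only the
    // total across lanes is guaranteed to match.
    cv::v_int32x4 prodFast = cv::v_dotprod_expand_fast(va, vb);

    std::cout << cv::v_reduce_sum(prod) << ' '
              << cv::v_reduce_sum(prodFast) << std::endl;   // both 272
#endif
    return 0;
}
```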
Add a basic sanity test to verify the rounding functions
work as expected.
Likewise, extend the rounding performance test to cover the
additional float -> int fast math functions.
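A minimal sketch of the kind of check such a sanity test performs (illustrative values, assuming the default FP rounding mode):

```cpp
#include <opencv2/core.hpp>
#include <cassert>

int main()
{
    // The float -> int fast math helpers the sanity test exercises.
    assert(cvRound(2.5)  == 2);   // ties round to even under the default FP mode
    assert(cvRound(3.5)  == 4);
    assert(cvFloor(-1.1) == -2);
    assert(cvCeil(-1.1)  == -1);
    return 0;
}
```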