Vectorize minMaxIdx functions
* Updated documentation and intrinsic tests for v_reduce
* Add other files back in from the forced push
* Prevent a constant overflow with v_reduce for the int8 type
* Another alternative to fix the constant overflow warning.
* Fix another compiler warning.
* Update comments and change comparison form to be consistent with other vectorized loops.
* Change return type of v_reduce_min & max for v_uint8 and v_uint16 to be the same as the lane type.
* Cast v_reduce functions to int to avoid overflow (see the sketch after this list). Reduce the number of parameters in the MINMAXIDX_REDUCE macro.
* Restore cast type for v_reduce_min & max to LaneType
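Taken together, these commits are about doing the min/max scan with universal intrinsics while being careful about the type returned by `v_reduce_min`/`v_reduce_max`. A minimal sketch of the pattern under those assumptions (illustrative code, not the actual `minMaxIdx` implementation):

```cpp
#include <algorithm>
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>

// Scan a uchar buffer for its min/max. The v_reduce_* results have the lane
// type (uchar here), so they are cast to int before the scalar comparisons to
// avoid constant-overflow warnings on small lane types.
static void minMaxU8(const uchar* src, int len, int& minVal, int& maxVal)
{
    minVal = 255; maxVal = 0;
    int i = 0;
#if CV_SIMD128
    cv::v_uint8x16 vmin = cv::v_setall_u8((uchar)255), vmax = cv::v_setall_u8((uchar)0);
    for (; i <= len - 16; i += 16)
    {
        cv::v_uint8x16 v = cv::v_load(src + i);
        vmin = cv::v_min(vmin, v);
        vmax = cv::v_max(vmax, v);
    }
    minVal = std::min(minVal, (int)cv::v_reduce_min(vmin));
    maxVal = std::max(maxVal, (int)cv::v_reduce_max(vmax));
#endif
    for (; i < len; ++i)   // scalar tail
    {
        minVal = std::min(minVal, (int)src[i]);
        maxVal = std::max(maxVal, (int)src[i]);
    }
}
```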
support eltwise sum with a different number of input channels in the CUDA backend
* add shortcut primitive
* add offsets in shortcut kernel
* skip tests involving more than two inputs
* remove redundant modulus operation
* support multiple inputs
* remove whole file indentation
* skip acc in0 trunc test if weighted
* use shortcut iff channels are unequal (see the sketch after this list)
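These shortcut commits deal with summing two inputs whose channel counts differ. A rough CPU-side sketch of the intended semantics, assuming NCHW layout and batch size 1 (names are illustrative; the actual cuda4dnn kernel works on device memory with offsets):

```cpp
#include <algorithm>
#include <vector>

// Output keeps the shape of 'from'; only the channels both tensors share are
// summed element-wise, the remaining channels of 'from' are passed through.
static std::vector<float> shortcutSum(const std::vector<float>& from, int fromChannels,
                                      const std::vector<float>& to,   int toChannels,
                                      int height, int width)
{
    const int plane = height * width;
    std::vector<float> out(from);                        // copy of 'from'
    const int commonChannels = std::min(fromChannels, toChannels);
    for (int c = 0; c < commonChannels; ++c)
        for (int i = 0; i < plane; ++i)
            out[c * plane + i] += to[c * plane + i];     // eltwise add on shared channels
    return out;
}
```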
Enable cuda4dnn on hardware without support for __half
* Enable cuda4dnn on hardware without support for half (i.e. compute capability < 5.3)
* Update CMakeLists.txt: lowered minimum CC to 3.0
* UPD: added ifdef on new copy kernel
* added fp16 support detection at runtime (see the sketch after this list)
* Clarified #if condition on atomicAdd definition
* More explicit CMake error message
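The key runtime check behind these commits is the device's compute capability: full `__half` arithmetic requires CC 5.3 or newer, while older devices (down to the lowered CMake minimum) fall back to FP32. A small sketch of such a check using the CUDA runtime API (the actual cuda4dnn code is organized differently):

```cpp
#include <cuda_runtime.h>

// True if the device can run FP16 (__half) kernels, i.e. CC >= 5.3.
static bool deviceSupportsFP16(int device)
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, device) != cudaSuccess)
        return false;
    return prop.major * 10 + prop.minor >= 53;
}
```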
Fix implicit conversion from array to scalar in python bindings
* Fix wrong conversion behavior for primitive types
- Introduce ArgTypeInfo namedtuple instead of a plain tuple.
  If the strict conversion parameter for a type is set to true, it is
  handled like an object argument in PyArg_ParseTupleAndKeywords and
  converted to the concrete type with the appropriate pyopencv_to
  function call.
- Remove dead code and unused variables.
- Fix implicit conversion from a numpy array with 1 element to a scalar.
- Fix narrowing conversion to size_t type.
- Enable tests with wrong conversion behavior
- Restrict passing None as a value
- Restrict bool to integer/floating-point type conversion
* Add PyIntType support for Python 2
* Remove possible narrowing conversion of size_t
* Bindings conversion update
- Remove unused macro
- Add better conversion from types to numpy type descriptors
- Add argument name to fail messages
- NoneType is treated as a valid argument. Better handling will be added
  as a standalone patch
* Add descriptor specialization for size_t
* Add check for signed to unsigned integer conversion safety (see the sketch after this list)
- If a signed integer is positive, it can be safely converted
  to unsigned
- Add check for plain python 2 objects
- Add check for numpy scalars
- Add simple type_traits implementation for better code style
* Resolve type "overflow" false negative in safe casting check
- Move type_traits to separate header
* Add copyright message to type_traits.hpp
* Limit conversion scope for integral numpy types
- Made canBeSafelyCasted specialized only for size_t, so the
  type_traits header became unused and was removed.
- Added clarification about descriptor pointer
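A minimal sketch of the signed-to-unsigned safety idea described above, assuming a hypothetical helper name (the actual binding code specializes the check for `size_t` and also handles plain Python 2 objects and numpy scalars):

```cpp
#include <cstddef>
#include <limits>
#include <type_traits>

// A signed integral value is representable as size_t only when it is
// non-negative and does not exceed size_t's maximum value.
template <typename T>
bool canBeSafelyCastedToSizeT(T value)
{
    static_assert(std::is_integral<T>::value, "integral types only");
    if (std::is_signed<T>::value && value < static_cast<T>(0))
        return false;                          // negative values never fit
    typedef unsigned long long U;
    return static_cast<U>(value) <=
           static_cast<U>(std::numeric_limits<std::size_t>::max());
}
```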
Add lightweight IE hardware targets checks
nGraph: Concat with paddings
Enable more nGraph tests
Restore FP32->FP16 for GPU plugin of IE
try to fix buildbot
Use the lightweight IE targets check only starting from R4
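These commits concern which hardware targets the Inference Engine backend exposes. For reference, the set of available IE targets can be queried through the public dnn API (a usage sketch, not the code touched by these commits):

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    // Ask the dnn module which targets the Inference Engine backend offers
    // on this machine (CPU, OpenCL, MYRIAD, ...).
    std::vector<cv::dnn::Target> targets =
        cv::dnn::getAvailableTargets(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    for (size_t i = 0; i < targets.size(); ++i)
        std::cout << "IE target id: " << (int)targets[i] << std::endl;
    return 0;
}
```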
- some of the `icvCvt_BGR*` functions have the R and B channels
swapped, which leads to wrong conversions
- rename the misleading `rgb` variable to `bgr`
- swap back the conversion coefficients; `cB` should come first (see the sketch below)
Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>
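A tiny illustration of why the coefficient order matters for BGR-ordered data (illustrative helper only; the actual `icvCvt_BGR*` conversions may use fixed-point arithmetic):

```cpp
// For a pixel stored as B, G, R the blue coefficient must be applied to the
// first channel; applying cR to it instead converts the data as if it were RGB.
static inline float bgrToGray(const unsigned char* bgr, float cB, float cG, float cR)
{
    return cB * bgr[0] + cG * bgr[1] + cR * bgr[2];
}
```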
Actually, we can do this in constant time. xofs always
contains the same or increasing offset values. We can instead
find the most extreme value used and never attempt to load it.
Similarly, we can note that for all dx >= 0 and dx < (dwidth - cn),
xofs[dx] + cn < xofs[dwidth - cn] implies dx < (dwidth - cn).
Thus, we can use this to control our loop termination optimally.
This fixes #16137 with little or no performance impact. I have
also added a debug check as a sanity check.
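A rough sketch of the loop split implied by that reasoning, with assumed roles for S, D, xofs, dwidth and cn (this is not the actual resize code):

```cpp
#include <opencv2/core.hpp>

// xofs[] holds a non-decreasing source offset for every destination index dx.
// Stopping the fast path at dwidth - cn keeps it away from the largest offsets,
// so it never loads past the end of the source row; the tail is done separately.
static void resizeRowSketch(const float* S, float* D, const int* xofs,
                            int dwidth, int cn)
{
    int dx = 0;
    for (; dx < dwidth - cn; ++dx)
    {
        CV_DbgAssert(dx == 0 || xofs[dx] >= xofs[dx - 1]);  // offsets never decrease
        D[dx] = S[xofs[dx]];   // fast path: wide/unrolled loads would be safe here
    }
    for (; dx < dwidth; ++dx)
        D[dx] = S[xofs[dx]];   // conservative scalar tail
}
```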
* add cv::compare test when Mat type == CV_16F
* add assertion in cv::compare when src.depth() == CV_16F (see the sketch after this list)
* cv::compare assertion minor fix
* core: add more checks
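If half-precision matrices do need to be compared, one option (a sketch, assuming `cv::compare` itself rejects CV_16F input) is to convert to CV_32F first:

```cpp
#include <opencv2/core.hpp>

// Compare two CV_16F matrices by promoting them to a supported depth first.
static cv::Mat compareFP16(const cv::Mat& a, const cv::Mat& b, int cmpOp)
{
    CV_Assert(a.depth() == CV_16F && b.depth() == CV_16F);
    cv::Mat a32, b32, result;
    a.convertTo(a32, CV_32F);
    b.convertTo(b32, CV_32F);
    cv::compare(a32, b32, result, cmpOp);   // e.g. cmpOp = cv::CMP_GT
    return result;                          // CV_8U mask
}
```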
* enable tests for DNN_TARGET_CUDA_FP16 (see the sketch after this list)
* disable deconvolution tests
* disable shortcut tests
* fix typos and make some minor changes
* dnn(test): skip CUDA FP16 test too (run_pool_max)
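For reference, a network is pointed at the target exercised by these tests through the standard dnn API (usage sketch; "model.onnx" is a placeholder, not a file from the test suite):

```cpp
#include <opencv2/dnn.hpp>

// Run a network on the CUDA backend with FP16 math enabled.
void runOnCudaFp16(const cv::Mat& blob)
{
    cv::dnn::Net net = cv::dnn::readNet("model.onnx");   // placeholder model
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA_FP16);
    net.setInput(blob);
    cv::Mat out = net.forward();
    (void)out;   // use the output as needed
}
```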