* Allow the number of threads FFMpeg uses to be selected during VideoCapture::open().
Reset the interrupt timer in grab() if
err = avformat_find_stream_info(ic, NULL);
is interrupted but open is successful.
* Correct the returned number of threads and amend test cases.
* Update container test case.
* Reverse changes added to existing videoio_container test case and include test combining thread change and raw read in the newly added videoio_read test case.
In some situations the last value was missing from the discrete theta
values. Now, the last value is chosen such that it is close to the
user-provided maximum theta, while the distance to pi remains always
at least theta_step/2. This should avoid duplicate detections.
A better way would probably be to use max_theta as is and adjust the
resolution (theta_step) instead, such that the discretization would
always be uniform (in a circular sense) when the full angle range is used.
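For illustration, a minimal sketch of the rule described above (the helper name and exact arithmetic are illustrative, not the merged code):

```cpp
#include <opencv2/core.hpp>

// Illustrative only: take the largest discrete theta not exceeding max_theta,
// then step back if it lands closer than theta_step/2 to pi, so values near pi
// do not duplicate detections at theta = 0.
static double lastDiscreteTheta(double min_theta, double max_theta, double theta_step)
{
    double last = min_theta + cvFloor((max_theta - min_theta) / theta_step) * theta_step;
    if (CV_PI - last < theta_step / 2)
        last -= theta_step;
    return last;
}
```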
This fixes the following error with mingw toolchain:
opencv/modules/videoio/src/cap_msmf.cpp:1020: error: 'wstring_convert' is not a member of 'std'
1020 | std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
| ^~~~~~~~~~~~~~~
opencv/modules/videoio/src/cap_ffmpeg_hw.hpp:230:26: error: 'wstring_convert' is not a member of 'std'
230 | std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
| ^~~~~~~~~~~~~~~
The locale header is required according to C++ standard.
See https://en.cppreference.com/w/cpp/locale/wstring_convert
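A minimal sketch of the fix, assuming the conversion code stays as shown in the error message above:

```cpp
#include <locale>   // required for std::wstring_convert per the C++ standard
#include <codecvt>  // std::codecvt_utf8_utf16
#include <string>

static std::wstring toWide(const std::string& s)
{
    std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
    return conv.from_bytes(s);
}
```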
This fixes the following error with mingw toolchain:
opencv/modules/videoio/src/cap_obsensor/obsensor_stream_channel_msmf.hpp:160:10: error: 'condition_variable' in namespace 'std' does not name a type
160 | std::condition_variable streamStateCv_;
| ^~~~~~~~~~~~~~~~~~
libstdc++ that comes with gcc 4.8 doesn't
define `getline(basic_istream<char>&&, std::string&)`
even if it's part of the c++11 standard.
However we can still use the following:
`getline(basic_istream<char>&, std::string&)`.
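A minimal sketch of the workaround (names illustrative): keep the stream in a named lvalue so the lvalue overload is picked:

```cpp
#include <fstream>
#include <string>

// gcc 4.8's libstdc++ lacks getline(basic_istream<char>&&, std::string&), so do not
// pass a temporary stream; use a named stream and the lvalue overload instead.
static int countLines(const char* path)
{
    std::ifstream ifs(path);          // named lvalue, not a temporary
    std::string line;
    int n = 0;
    while (std::getline(ifs, line))   // lvalue overload, available everywhere
        ++n;
    return n;
}
```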
* videoio: add support for obsensor (Orbbec RGB-D Camera)
* obsensor: code format issues fixed and some code optimized
* obsensor: fix typo and format issues
* obsensor: fix crosses initialization error
[GSoC] New universal intrinsic backend for RVV
* Add new rvv backend (partially implemented).
* Modify the framework of Universal Intrinsic.
* Add CV_SIMD macro guards to current UI code.
* Use vlanes() instead of nlanes.
* Modify the UI test.
* Enable the new RVV (scalable) backend.
* Remove whitespace.
* Rename and other minor modifications.
* Update intrin.hpp, but it still does not work on AVX/SSE
* Update conditional compilation macros.
* Use static variable for vlanes.
* Use max_nlanes for array definitions.
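A minimal sketch of the vlanes()-based loop the scalable backend requires (illustrative; the exact guards and traits usage in the merged code may differ):

```cpp
#include <opencv2/core/hal/intrin.hpp>

// With a scalable backend (RVV) the lane count is a runtime value, so query it
// via vlanes() instead of relying on the compile-time nlanes constant.
static void addScalar(const float* src, float* dst, int n, float val)
{
    using namespace cv;
    int i = 0;
#if (CV_SIMD || CV_SIMD_SCALABLE)
    const int step = VTraits<v_float32>::vlanes();
    v_float32 vval = vx_setall_f32(val);
    for (; i + step <= n; i += step)
        v_store(dst + i, v_add(vx_load(src + i), vval));
#endif
    for (; i < n; ++i)
        dst[i] = src[i] + val;
}
```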
Reimplementation of Element-wise layers with broadcasting support
* init
* semi-working initial version
* add small_vector
* wip
* remove smallvec
* add nary function
* replace auto with Mat in lambda expr used in transform
* uncomment asserts
* autobuffer shape_buf & step_buf
* fix a missing bracket
* fixed a missing addLayer in parseElementWise
* solve one-dimensional broadcast
* remove pre_broadcast_transform for the case of two constants; fix missing constBlobsExtraInfo when addConstant is called
* one autobuffer for step & shape
* temporary fix for the missing original dimension information
* fix parseUnsqueeze when it gets a 1d tensor constant
* support sum/mean/min/max with only one input
* reuse old code to handle cases of two non-constant inputs
* add condition to handle div & mul of two non-constant inputs
* use || instead of or
* remove trailing spaces
* enlarge buf in binary_forward to contain other buffer
* use autobuffer in nary_forward
* generate data randomly and add more cases for perf
* add op and, or & xor
* update perf_dnn
* remove some comments
* remove legacy; add two ONNX conformance tests in filter
* move from cpu_denylist to all_denylist
* adjust parsing for inputs>=2
Co-authored-by: fengyuentau <yuantao.feng@opencv.org.cn>
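For reference, a minimal sketch of the right-aligned (NumPy/ONNX style) shape-broadcasting rule the reimplemented layers rely on; the helper name is illustrative, not the merged code:

```cpp
#include <algorithm>
#include <vector>

// Align the two shapes on the right; a dimension broadcasts if it equals the
// other one or is 1. Returns false when the shapes are incompatible.
static bool broadcastShapes(const std::vector<int>& a, const std::vector<int>& b,
                            std::vector<int>& out)
{
    const size_t n = std::max(a.size(), b.size());
    out.assign(n, 1);
    for (size_t i = 0; i < n; ++i)
    {
        const int da = (i < n - a.size()) ? 1 : a[i - (n - a.size())];
        const int db = (i < n - b.size()) ? 1 : b[i - (n - b.size())];
        if (da != db && da != 1 && db != 1)
            return false;
        out[i] = std::max(da, db);
    }
    return true;
}
```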
- Add conditional compilation directives to replace deprecated std::random_shuffle with new std::shuffle when C++11 is available.
- Set random seed to a fixed value before shuffling containers to ensure reproducibility.
Resolves opencv/opencv#22209.
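A minimal sketch of the pattern, with an illustrative helper name and a fixed seed as described above:

```cpp
#include <algorithm>
#include <cstdlib>
#include <random>
#include <vector>

template <typename T>
static void shuffleForTest(std::vector<T>& v)
{
#if __cplusplus >= 201103L
    std::mt19937 rng(0x12345678);             // fixed seed -> reproducible order
    std::shuffle(v.begin(), v.end(), rng);
#else
    std::srand(0x12345678);
    std::random_shuffle(v.begin(), v.end());  // deprecated/removed in newer standards
#endif
}
```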
Add conditional compilation directives to enable uses of std::chrono on supported compilers. Use std::chrono::steady_clock as a source to retrieve current tick count and clock frequency.
Fixes opencv/opencv#6902.
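A minimal sketch of what the steady_clock-based path looks like (helper names illustrative):

```cpp
#include <chrono>
#include <cstdint>

static int64_t tickCountChrono()
{
    return std::chrono::steady_clock::now().time_since_epoch().count();
}

static double tickFrequencyChrono()   // ticks per second
{
    using period = std::chrono::steady_clock::period;
    return static_cast<double>(period::den) / period::num;
}
```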
Add per_tensor_quantize to int8 quantize
* add per_tensor_quantize to dnn int8 module.
* change the API flag from perTensor to perChannel, and recognize the quantization type in the ONNX importer.
* change the default to hpp
It's not clear how ranges argument should be used in the overload of
calcHist that accepts std::vector. The main overload uses array of
arrays there, while std::vector overload uses a plain array. The code
interprets the vector as a flattened array and rebuilds array of arrays
from it. This is not obvious interpretation, so documentation has been
added to explain the expected usage.
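A usage sketch of the std::vector overload with the flattened ranges interpretation documented here:

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

static cv::Mat hist2D(const cv::Mat& bgr)   // bgr: CV_8UC3
{
    std::vector<cv::Mat> images   = { bgr };
    std::vector<int>     channels = { 0, 1 };                     // B and G channels
    std::vector<int>     histSize = { 32, 32 };
    std::vector<float>   ranges   = { 0.f, 256.f, 0.f, 256.f };   // [lo0, hi0, lo1, hi1] flattened
    cv::Mat hist;
    cv::calcHist(images, channels, cv::noArray(), hist, histSize, ranges);
    return hist;
}
```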
DNN: Accelerating convolution
* Fast Conv of ARM, X86 and universal intrinsics.
* improve code style.
* error fixed.
* improve the License
* optimize memory allocation and adjust the threshold.
* change FasterRCNN_vgg16 to 2GB memory.
-enable using -DWITH_WAYLAND=ON
-adapted from https://github.com/pfpacket/opencv-wayland
-using xdg_shell stable protocol
-overrides HAVE_QT if HAVE_WAYLAND and WITH_WAYLAND are set
Signed-off-by: Joel Winarske <joel.winarske@gmail.com>
Co-authored-by: Ryo Munakata <afpacket@gmail.com>
Replaced sprintf with safer snprintf
* Straightforward replacement of sprintf with safer snprintf
* Trickier replacement of sprintf with safer snprintf
Some functions were changed to take another parameter: the size of the buffer, so that they can pass that size on to snprintf.
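A minimal sketch of the second, trickier kind of change (names illustrative): the callee now receives the buffer size and forwards it to snprintf:

```cpp
#include <cstdio>

// before: static void formatName(char* buf, int idx) { sprintf(buf, "frame_%04d.png", idx); }
static void formatName(char* buf, size_t bufSize, int idx)
{
    snprintf(buf, bufSize, "frame_%04d.png", idx);   // write is bounded by bufSize
}
```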
Fix issue 22015, let Clip layer support 1-3 inputs
* Fix issue 22015.
Let layer Clip support 1-3 inputs.
* Resolve other problems caused by modifications
* Update onnx_importer.cpp
added extra checks to min/max handling in Clip
* Add assertions to check the size of the input
* Add test for clip with min and max initializers
* Separate test for "clip_init_min_max". Change the check method for input_size to provide a clearer message in case of problem.
* Add tests for clip with min or max initializers
* Change the implementation of getting input
Co-authored-by: Vadim Pisarevsky <vadim.pisarevsky@gmail.com>
Fix sampling for version multiplying factor
* reduce experimentalFrequencyElem and listFrequencyElem
* fix large resize
* fix tile in postIntermediate
* add getMinSideLen(), add corrected_index
* add test decode_regression_21929 (author: Kumataro), add test decode_regression_version_25
* objdetect: qrcode_encoder: fix to missing timing pattern
* objdetect: qrcode_encoder: Add SCOPED_TRACE() and replace CV_Assert() with ASSERT_EQ().
- Add SCOPED_TRACE() for version loop.
- Replace CV_Assert() with ASSERT_EQ().
- Rename expect_msg to msg.
Some GStreamer elements may produce buffers with very non-standard
strides and offsets, and/or even transport each plane
through different, non-contiguous pointers. This non-standard
layout is communicated via GstVideoMeta structures attached
to the buffers. Given this, when a GstVideoMeta is available,
one should parse the layout from it instead of generating
a generic one from the caps.
The GstVideoFrame utility does precisely this: if the buffer
contains a video meta, it uses that to fill the format and
memory layout. If there is no meta available, the layout is
inferred from the caps.
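A minimal sketch of that GstVideoFrame usage (error handling trimmed; the actual capture code differs):

```cpp
#include <gst/video/video.h>

// 'info' comes from the negotiated caps; if the buffer carries a GstVideoMeta,
// gst_video_frame_map() uses it for the real strides/offsets/plane pointers.
static void readFirstPlane(GstVideoInfo* info, GstBuffer* buffer)
{
    GstVideoFrame frame;
    if (!gst_video_frame_map(&frame, info, buffer, GST_MAP_READ))
        return;
    guint8* data   = (guint8*)GST_VIDEO_FRAME_PLANE_DATA(&frame, 0);
    gint    stride = GST_VIDEO_FRAME_PLANE_STRIDE(&frame, 0);
    (void)data; (void)stride;          // ... wrap into a cv::Mat here ...
    gst_video_frame_unmap(&frame);
}
```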
* Added support for 4B RGB V4L2 pixel formats
Added support for V4L2_PIX_FMT_XBGR32 and V4L2_PIX_FMT_ABGR32 pixel
formats.
* Added workaround for missing V4L2_PIX_FMT_ABGR32 and V4L2_PIX_FMT_XBGR32
defines
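A minimal sketch of such a workaround (fourcc values as in linux/videodev2.h; verify against your kernel headers):

```cpp
#include <linux/videodev2.h>

#ifndef V4L2_PIX_FMT_ABGR32
#define V4L2_PIX_FMT_ABGR32 v4l2_fourcc('A', 'R', '2', '4')  /* 32-bit BGRA-8-8-8-8 */
#endif
#ifndef V4L2_PIX_FMT_XBGR32
#define V4L2_PIX_FMT_XBGR32 v4l2_fourcc('X', 'R', '2', '4')  /* 32-bit BGRX-8-8-8-8 */
#endif
```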
Fixes and optimizations for the SQPnP solver
* Fixes and optimizations
- optimized the calculation of qa_sum by moving equal elements outside the loop
- unrolled copying of the lower triangle of omega
- substituted SVD with eigendecomposition in the factorization of omega (2-3 times faster)
- fixed the initialization of lambda in FOAM
- added a cheirality test that checks a solution on all 3D points rather than on their mean. The old test rejected valid poses in some cases
- fixed some typos & errors in comments
* reverted to SVD
Eigen decomposition seems to yield larger errors in certain tests, reverted to SVD
* nearestRotationMatrixSVD
Added nearestRotationMatrixSVD()
Previous nearestRotationMatrix() renamed to nearestRotationMatrixFOAM() and reverts to nearestRotationMatrixSVD() for singular matrices
* fixed checks order
Fixed the order of checks in PoseSolver::solveInternal()
Add undistortImagePoints function
* Add undistortImagePoints function
undistortPoints has an unclear interface and additional functionality. The new function computes only the undistorted image point positions
* Add undistortImagePoints test
* Add TermCriteria
* Fix layout
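A usage sketch of the new function, assuming the final signature matches the description above (image points in, undistorted image points out; the TermCriteria argument has a default):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

static std::vector<cv::Point2f> undistort(const std::vector<cv::Point2f>& distorted,
                                          const cv::Mat& cameraMatrix,
                                          const cv::Mat& distCoeffs)
{
    std::vector<cv::Point2f> undistorted;
    // returns points in pixel coordinates, unlike undistortPoints' normalized output
    cv::undistortImagePoints(distorted, undistorted, cameraMatrix, distCoeffs);
    return undistorted;
}
```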
If there is a measurement before the next predict, `statePost` will be assigned the updated value. So I guess these steps are meant to handle the case when there is no measurement and the KF only performs the predict step.
```cpp
statePre.copyTo(statePost);
errorCovPre.copyTo(errorCovPost);
```
In test_imgproc.js, the last test of the test_filter suite assigns a value
to `size` without declaring it with `let`, polluting the global scope.
This commit adds `let` to the statement, so that the variable is scoped
to the test block.
Add distort/undistort test for fisheye::undistortPoints()
* Add distort/undistort test for fisheye::undistortPoints()
The lack of a test allowed the error described in 19138 to go unnoticed.
In addition to random points, the four corners and the principal point
are added to the point set
* Add random distortion coefficients set
* Move undistortPoints test to google test, refactor
* Add fisheye::undistortPoints() perf test
* Add negative distortion coefficients to undistortPoints test, increase value
* Move to theRNG()
* Change test check from cvtest::norm(L2) to EXPECT_MAT_NEAR()
* Layout fix
* Add points number parameters, comments
[GAPI] Support basic inference in OAK backend
* Combined commit which enables basic inference and other extra capabilities of OAK backend
* Remove unnecessary target options from the cmakelist
Fixed out-of-bounds read in parallel version of ippGaussianBlur()
* Fixed out-of-memory read in parallel version of ippGaussianBlur()
* Fixed check
* Revert changes in CMakeLists.txt
Fixed handling of new stream, especially for stateful OCV kernels
* Fixed handling of new stream, especially for stateful OCV kernels
* Removed duplication from StateInitOnce tests
* Addressed review comments for PR #21731
- Fixed explanation comments
- Expanded test for stateful OCV kernels in Regular mode
* Addressed review comments for PR #21731
- Moved notification about new stream to the constructor
- Added test on state reset for Regular mode
* Addressed review comments
* Addressed review comments
Co-authored-by: Ruslan Garnov <ruslan.garnov@intel.com>
Python binding for matches and inliers_mask attributes of cv2.detail_MatchesInfo class
* making matches and inliers_mask attributes of cv2.detail_MatchesInfo class accessible from the Python interface
* binding test for cv2.detail_MatchesInfo class
[G-API] Handle exceptions in streaming executor
* Handle exceptions in streaming executor
* Rethrow exception in non-streaming executor
* Clean up
* Put more tests
* Handle exceptions in IE backend
* Handle exception in IE callbacks
* Handle exception in GExecutor
* Handle all exceptions in IE backend
* Not only (std::exception& e)
* Fix comments to review
* Handle input exception in generic way
* Fix comment
* Clean up
* Apply review comments
* Put more comments
* Fix alignment
* Move test outside of HAVE_NGRAPH
* Fix compilation
/wrkdirs/usr/ports/graphics/opencv/work/opencv-4.5.5/modules/core/include/opencv2/core/vsx_utils.hpp:352:12: warning: 'vec_permi' macro redefined [-Wmacro-redefined]
# define vec_permi(a, b, c) vec_xxpermdi(b, a, (3 ^ (((c) & 1) << 1 | (c) >> 1)))
^
/usr/lib/clang/13.0.0/include/altivec.h:13077:9: note: previous definition is here
#define vec_permi(__a, __b, __c) \
^
/wrkdirs/usr/ports/graphics/opencv/work/opencv-4.5.5/modules/core/include/opencv2/core/vsx_utils.hpp:370:25: error: redefinition of 'vec_promote'
VSX_FINLINE(vec_dword2) vec_promote(long long a, int b)
^
/usr/lib/clang/13.0.0/include/altivec.h:14604:1: note: previous definition is here
vec_promote(signed long long __a, int __b) {
^
/wrkdirs/usr/ports/graphics/opencv/work/opencv-4.5.5/modules/core/include/opencv2/core/vsx_utils.hpp:377:26: error: redefinition of 'vec_promote'
VSX_FINLINE(vec_udword2) vec_promote(unsigned long long a, int b)
^
/usr/lib/clang/13.0.0/include/altivec.h:14611:1: note: previous definition is here
vec_promote(unsigned long long __a, int __b) {
^
/wrkdirs/usr/ports/graphics/opencv/work/opencv-4.5.5/modules/core/include/opencv2/core/hal/intrin_vsx.hpp:1045:22: error: call to 'vec_rsqrt' is ambiguous
{ return v_float32x4(vec_rsqrt(x.val)); }
^~~~~~~~~
/usr/lib/clang/13.0.0/include/altivec.h:8472:34: note: candidate function
static vector float __ATTRS_o_ai vec_rsqrt(vector float __a) {
^
/wrkdirs/usr/ports/graphics/opencv/work/opencv-4.5.5/modules/core/include/opencv2/core/vsx_utils.hpp:362:29: note: candidate function
VSX_FINLINE(vec_float4) vec_rsqrt(const vec_float4& a)
^
/wrkdirs/usr/ports/graphics/opencv/work/opencv-4.5.5/modules/core/include/opencv2/core/hal/intrin_vsx.hpp:1047:22: error: call to 'vec_rsqrt' is ambiguous
{ return v_float64x2(vec_rsqrt(x.val)); }
^~~~~~~~~
/usr/lib/clang/13.0.0/include/altivec.h:8477:35: note: candidate function
static vector double __ATTRS_o_ai vec_rsqrt(vector double __a) {
^
/wrkdirs/usr/ports/graphics/opencv/work/opencv-4.5.5/modules/core/include/opencv2/core/vsx_utils.hpp:365:30: note: candidate function
VSX_FINLINE(vec_double2) vec_rsqrt(const vec_double2& a)
^
1 warning and 4 errors generated.
The specific functions were added to altivec.h in LLVM's 1ff93618e58df210def48d26878c20a1b414d900, c3da07d216dd20fbdb7302fd085c0a59e189ae3d and 10cc5bcd868c433f9a781aef82178b04e98bd098.
* better accuracy of _rotatedRectangleIntersection
instead of just migrating to double precision (which would work), some computations are scaled by a factor that depends on the length of the smallest vectors.
Accuracy is better even with floats, so this is certainly better for very sensitive cases
* Update intersection.cpp
use L2SQR norm to tune the numeric scale
* Update intersection.cpp
adapt samePointEps with L2 norm
* Update intersection.cpp
move comment
* Update intersection.cpp
fix wrong numericalScalingFactor usage
* added tests
* fixed warnings returned by buildbot
* modifications suggested by reviewer
renaming numericalScalingFactor to normalizationScale
refactor some computations
more "const"
* modifications as suggested by reviewer
Fix LSTM support in ONNX
* fix LSTM and add peephole support
* disable old tests
* turn lambdas into functions
* more hacks for c++98
* add assertions
* slice fixes
* backport of cuda-related fixes
* address review comments
Add 10-12-14bit (integer) TIFF decoding support
* Add 12bit (integer) TIFF decoding support
A (slow) unpacking step is inserted when the native bpp is not equal to the dst_bpp
Currently, I do not know if there can be several packing flavours in TIFF data.
* added tests
* move sample files to opencv_extra
* added 10b and 14b unpacking
* fix compilation for non MSVC compilers by using more standard typedefs
* yet another typedef usage change to fix buildbot Mac compilation
* fixed unpacking of partial packets
* fixed warnings returned by buildbot
* modifications as suggested by reviewer
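A minimal sketch of the 12-bit flavour of the unpacking step described above, assuming MSB-first packing of two samples into three bytes (illustrative, not the merged code):

```cpp
#include <cstdint>

static void unpack12To16(const uint8_t* src, uint16_t* dst, int count)
{
    int i = 0;
    for (; i + 1 < count; i += 2, src += 3)
    {
        dst[i]     = (uint16_t)((src[0] << 4) | (src[1] >> 4));
        dst[i + 1] = (uint16_t)(((src[1] & 0x0F) << 8) | src[2]);
    }
    if (i < count)   // partial packet: a lone trailing sample padded to two bytes
        dst[i] = (uint16_t)((src[0] << 4) | (src[1] >> 4));
}
```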
* add apply softmax option to ClassificationModel
* remove default arguments of ClassificationModel::setSoftMax()
* fix build for python
* fix docs warning for setSoftMax()
* add impl for ClassificationModel()
* fix failed docs build caused by trailing whitespace
* move to implement classify() to ClassificationModel_Impl
* move to implement softmax() to ClassificationModel_Impl
* remove softmax from public method in ClassificationModel
All classes are registered in the scope that corresponds to C++
namespace or exported class.
Example:
`cv::ml::Boost` is exported as `cv.ml.Boost`
`cv::SimpleBlobDetector::Params` is exported as
`cv.SimpleBlobDetector.Params`
For backward compatibility all classes are registered in the global
module with their mangling name containing scope information.
Example:
`cv::ml::Boost` has `cv.ml_Boost` alias to `cv.ml.Boost` type
Optimize cv::applyColorMap() for simple case
* Optimize cv::applyColorMap() for simple case
PR for 21640
For regular cv::Mat CV_8UC1 src, applying the colormap is simpler than calling the cv::LUT() mechanism.
* add support for src as CV_8UC3
src as CV_8UC3 is handled with a BGR2GRAY conversion, the same optimized code being used afterwards
* code style
rely on cv::Mat.ptr() to index data
* Move new implementation to ColorMap::operator()
Changes as suggested by reviewer
* style
improvements suggested by reviewer
* typo
* tune parallel work
* better usage of parallel_for_
use nstripes parameter of parallel_for_
assume _lut is continuous to bring faster pixel indexing
optimize src/dst access by contiguous rows of pixels
do not locally copy the LUT any more, it is no more relevant with the new optimizations
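A minimal sketch of the simple path described above for CV_8UC1 input (illustrative; the merged code also parallelizes over rows with parallel_for_):

```cpp
#include <opencv2/core.hpp>

static void applyLutDirect(const cv::Mat& src /*CV_8UC1*/,
                           const cv::Mat& lut /*256x1 CV_8UC3*/, cv::Mat& dst)
{
    dst.create(src.size(), CV_8UC3);
    const cv::Vec3b* table = lut.ptr<cv::Vec3b>();   // assumes lut.isContinuous()
    for (int y = 0; y < src.rows; ++y)
    {
        const uchar* s = src.ptr<uchar>(y);
        cv::Vec3b*   d = dst.ptr<cv::Vec3b>(y);
        for (int x = 0; x < src.cols; ++x)
            d[x] = table[s[x]];                      // direct 256-entry LUT lookup
    }
}
```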
* Added NEON support in builds for Windows on ARM
* Fixed `HAVE_CPU_NEON_SUPPORT` display broken during compiler test
* Fixed a build error prior to Visual Studio 2022
4.x: submodule or a class scope for exported classes
* feature: submodule or a class scope for exported classes
All classes are registered in the scope that corresponds to C++
namespace or exported class.
Example:
`cv::ml::Boost` is exported as `cv.ml.Boost`
`cv::SimpleBlobDetector::Params` is exported as
`cv.SimpleBlobDetector.Params`
For backward compatibility all classes are registered in the global
module with their mangling name containing scope information.
Example:
`cv::ml::Boost` has `cv.ml_Boost` alias to `cv.ml.Boost` type
* refactor: remove redundant GAPI aliases
* fix: use explicit string literals in CVPY_TYPE macro
* fix: add handling for class aliases
Use YuNet of fixed input shape to fix not-supported-dynamic-zero-shape for FaceDetectorYN
* use yunet with input of fixed shape
* update yunet used in face recognition regression
Thread Sanitizer identified an incorrect implementation of double-checked locking.
Replaced it with a static, which therefore can only be created once.
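A minimal sketch of the pattern (types illustrative): since C++11 a function-local static is initialized exactly once, which is what the hand-rolled double-checked locking tried to guarantee:

```cpp
struct Resource { /* expensive one-time setup */ };

static const Resource& getResource()
{
    static Resource instance;   // thread-safe, one-time initialization (C++11 "magic static")
    return instance;
}
```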
Default FFMPEG VideoCapture backend to rtsp_flags=prefer_tcp
* Make the VideoCapture FFmpeg backend's default RTSP connection type prefer_tcp.
* Ensure that the ffmpeg version of avformat is checked.
Per Intel docs for libva, when vaDeriveImage fails, vaCreateImage +
vaPutImage should be tried. This is important as Mesa with AMD HW
will always fail because the image is interlaced, so an indirect
method must be used to get the surface to/from an image
Fixes https://github.com/opencv/opencv/issues/21536
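A minimal sketch of that fallback order (error handling trimmed; variables illustrative):

```cpp
#include <va/va.h>

static void readSurface(VADisplay dpy, VASurfaceID surface,
                        VAImageFormat* fmt, int width, int height)
{
    VAImage img;
    if (vaDeriveImage(dpy, surface, &img) != VA_STATUS_SUCCESS)
    {
        // e.g. Mesa/AMD with interlaced surfaces: derive always fails,
        // so create an image and copy through it instead
        vaCreateImage(dpy, fmt, width, height, &img);
        vaGetImage(dpy, surface, 0, 0, width, height, img.image_id);
        // (use vaPutImage() for the write-back direction)
    }
    // ... map img.buf with vaMapBuffer(), then vaDestroyImage(dpy, img.image_id) ...
}
```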
* Fix wrong MSAN errors.
Because Fortran is called in LAPACK, MSAN does not think the memory
has been written even though it is the case.
MSAN does not support cross-language memory analysis well.
* Make a dedicated check.
- Add special case handling when submodule has the same name as parent
- `PyDict_SetItemString` doesn't steal reference, so reference count
should be explicitly decremented to transfer object life-time
ownership
- Add sanity checks for module registration input
- Add Python 2 and Python 3 reference counting handling
G-API: Wrap GStreamerSource
* Wrap GStreamerSource into python
* Fixed test skipping when Gst-src can't be created
* Wrapped GStreamerPipeline class, added dummy test for it
* Fix no_gst testing
* Changed wrap for GStreamerPipeline::getStreamingSource() : now python-specific in-class method GStreamerPipeline::get_streaming_source()
* Added accuracy tests vs OCV:VideoCapture(Gstreamer)
* Add skipping when VideoCapture(GSTREAMER) can't be used;
Add better handling when the GStreamer backend is unavailable;
Changed video to avoid terminations
* Applying comments
* back to a separate get_streaming_source function, with comment
Co-authored-by: OrestChura <orest.chura@intel.com>
G-API: oneVPL DX11 inference
* Draft GPU infer
* Fix incorrect subresource_id for array of textures
* Fix for TheOneSurface in different Frames
* Turn on VPP param configuration
* Add cropIn params
* Remove infer sync sample
* Remove comments
* Remove DX11AllocResource extra init
* Add condition for NV12 processing in giebackend
* Add VPP frames pool param configurable
* -M Remove extra WARN & INFOs, Fix custom MAC
* Remove global vars from example, Fix some comments, Disable blobParam due to OV issue
* Conflict resolving
* Revert back pointer cast for cv::any
clang-cl defines both __clang__ and _MSC_VER, yet uses `#pragma GCC` to disable certain diagnostics.
At the time `-Wreturn-type-c-linkage` was reported by clang-cl.
This PR fixes this behavior by reordering defines.
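A minimal sketch of the reordering (the exact diagnostics differ per file):

```cpp
// clang-cl defines both __clang__ and _MSC_VER, so test __clang__ first to take
// the GCC-style pragma path; plain MSVC falls through to its own pragma syntax.
#if defined(__clang__)
#  pragma GCC diagnostic ignored "-Wreturn-type-c-linkage"
#elif defined(_MSC_VER)
   /* #pragma warning(disable: ...) for MSVC-specific diagnostics */
#endif
```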
- Add special case handling when submodule has the same name as parent
- `PyDict_SetItemString` doesn't steal reference, so reference count
should be explicitly decremented to transfer object life-time
ownership
- Add sanity checks for module registration input
Comment from Python documentation:
Unlike other functions that steal references, `PyModule_AddObject()` only
decrements the reference count of value on success.
This means that its return value must be checked, and calling code must
`Py_DECREF()` value manually on error.
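A minimal sketch of the pattern that quote implies (names illustrative):

```cpp
#include <Python.h>

static int addSubmodule(PyObject* parent)
{
    PyObject* sub = PyModule_New("cv2.ml");
    if (!sub)
        return -1;
    if (PyModule_AddObject(parent, "ml", sub) < 0)
    {
        Py_DECREF(sub);   // AddObject steals the reference only on success
        return -1;
    }
    return 0;             // on success the parent module now owns 'sub'
}
```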
GAPI: Add OAK backend
* Initial tests and cmake integration
* Add a public header and change tests
* Stub initial empty template for the OAK backend
* WIP
* WIP
* WIP
* WIP
* Runtime dai hang debug
* Refactoring
* Fix hang and debug frame data
* Fix frame size
* Fix data size issue
* Move test code to sample
* tmp refactoring
* WIP: Code refactoring except for the backend
* WIP: Add non-camera sample
* Fix samples
* Backend refactoring wip
* Backend rework wip
* Backend rework wip
* Remove mat encoder
* Fix namespace
* Minor backend fixes
* Fix hetero sample and refactor backend
* Change linking logic in the backend
* Fix oak sample
* Fix working with ins/outs in OAK island
* Trying to fix nv12 problem
* Make both samples work
* Small refactoring
* Remove meta args
* WIP refactoring kernel API
* Change in/out args API for kernels
* Fix build
* Fix cmake warning
* Partially address review comments
* Partially address review comments
* Address remaining comments
* Add memory ownership
* Change pointer-to-pointer to reference-to-pointer
* Remove unnecessary reference wrappers
* Apply review comments
* Check that graph contains only one OAK island
* Minor refactoring
* Address review comments
The Qt backend directly calls some OpenGL functions (glClear, glHint,
glViewport), but since OCV 4.5.5 the GL libraries are no longer part
of the global extra dependencies. When linking with "-Wl,--no-undefined"
this causes linker errors:
`opencv-4.5.5/modules/highgui/src/window_QT.cpp:3307: undefined reference to `glClear'`
Fixes: #21346
Related issues: #21299
* Fix compile against lapack-3.10.0
Fix compilation against lapack >= 3.9.1 and 3.10.0 while not breaking older versions
OpenCVFindLAPACK.cmake & CMakeLists.txt: determine OPENCV_USE_LAPACK_PREFIX from LAPACK_VERSION
hal_internal.cpp : Only apply LAPACK_FUNC to functions whose number of inputs depends on LAPACK_FORTRAN_STR_LEN in lapack >= 3.9.1
lapack_check.cpp : remove LAPACK_FUNC, which is not OK as the functions are not used with input parameters (so lapack.h preprocessing of "LAPACK_xxxx(...)" is not applicable with lapack >= 3.9.1).
If not removed, lapack_check fails and LAPACK is deactivated in the build (not what we want)
use the OCV_ prefix and don't use Global; instead, generate OCV_LAPACK_FUNC depending on CMake conditions
Remove CONFIG from find_package(LAPACK) and use LAPACK_GLOBAL and LAPACK_NAME to figure out if using netlib's reference LAPACK implementation and how to #define OCV_LAPACK_FUNC(f)
* Fix typos and grammar in comments
Fixed threshold(THRESH_TOZERO) at imgproc(IPP)
* Fixed #16085: imgproc(IPP): wrong result from threshold(THRESH_TOZERO)
* 1. Added test cases with float where all bits of mantissa equal 1, min and max float as inputs
2. Used nextafterf instead of cast to hex
* Used float value in test instead of hex and casts
* Changed input value in test
When computing:
t1 = (bayer[1] + bayer[bayer_step] + bayer[bayer_step+2] + bayer[bayer_step*2+1])*G2Y;
there is a T (unsigned short or char) multiplied by an int which can overflow.
However, it is stored into t1, which is unsigned, so the overflow disappears.
Keeping everything unsigned is safer.
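A minimal sketch of the promotion issue and the unsigned fix (types and weights illustrative):

```cpp
// With T = ushort the four-sample sum is promoted to (signed) int; multiplying by
// G2Y (a fixed-point weight) can then exceed INT_MAX, which is undefined behaviour
// for signed int. Casting the sum to unsigned keeps the whole expression in
// unsigned arithmetic, where the result is well defined and fits.
static unsigned weightedGreen(const unsigned short* bayer, int bayer_step, unsigned G2Y)
{
    return (unsigned)(bayer[1] + bayer[bayer_step] + bayer[bayer_step + 2]
                      + bayer[bayer_step * 2 + 1]) * G2Y;
}
```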
Further optimize DNN for RISC-V Vector.
* Optimize DNN on RVV by using vsetvl.
* Rename vl.
* Update fastConv by using setvl instead of mask.
* Fix fastDepthwiseConv
- QGLWidget changed to QOpenGLWidget in window_QT.h for Qt6 using
typedef OpenCVQtWidgetBase for handling Qt version
- Implement Qt6/OpenGL functionality in window_QT.cpp
- Swap QGLWidget:: function calls for OpenCVQtWidgetBase:: function calls
- QGLWidget::updateGL deprecated, swap to QOpenGLWidget::update for Qt6
- Add preprocessor definition to detect Qt6 -- HAVE_QT6
- Add OpenGLWidgets to qdeps list in highgui CMakeLists.txt
- find_package CMake command added for locating Qt module OpenGLWidgets
- Added check that Qt6::OpenGLWidgets component is found. Shut off Qt-openGL functionality if not found.
For now, it is possible to define a valid rectangle for which some
functions overflow (e.g. br(), area(), ...).
This patch fixes the intersection operator so that it works with
any rectangle.
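A minimal sketch of the kind of rectangle the fix has to survive (illustrative):

```cpp
#include <climits>
#include <opencv2/core.hpp>

static cv::Rect intersectNearLimit()
{
    cv::Rect a(INT_MAX - 10, 0, 20, 20);  // valid rect, but a.br().x = a.x + a.width overflows int
    cv::Rect b(0, 0, 100, 100);
    return a & b;                         // the patched operator must not rely on br()
}
```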
G-API: oneVPL merge DX11 acceleration
* Merge DX11 initial
* Fold conditions row in MACRO in utils
* Inject DeviceSelector
* Turn on DeviceSelector in DX11
* Change sharedLock logic & Move FMT checking in FrameAdapter c-tor
* Move out NumSuggestFrame to configure params
* Drain file source fix
* Fix compilation
* Force zero initialization of SharedLock
* Fix some compiler warnings
* Fix integer comparison warnings
* Fix integers in sample
* Integrate Demux
* Fix compilation
* Add predefined names for some CfgParam
* Trigger CI
* Fix MultithreadCtx bug, Add Dx11 GetBlobParam(), Get rid of ATL CComPtr
* Fix UT: remove unit test with deprecated video from opencv_extra
* Add creators for most usable CfgParam
* Eliminate some warnings
* Fix warning in GAPI_Assert
* Apply comments
* Add VPL wrapped header with MSVC pragma to get rid of global warning masking
Added CV_PROP_RW macro to keypoints
* Added CV_PROP_RW macro to keypoints
As outlined in the feature request in the issue https://github.com/opencv/opencv/issues/21171 : the keypoints field has been made parsable by the bindings.
* Added test for keypoints
Added test to check if the CV_PROP_RW macro added in the previous commit makes keypoints public and accessible through the python API.
Audio MSMF: added the ability to set samples per second
* Audio MSMF: added the ability to set samples per second
* changed the valid sampling rate check
* fixed docs
* add test
* fixed warning
* fixed error
* fixed error
Update RVV backend for using Clang.
* Update cmake file of clang.
* Modify the RVV optimization on DNN to adapt to clang.
* Modify intrin_rvv: Disable some existing types.
* Modify intrin_rvv: Reinterpret instead of load&cast.
* Modify intrin_rvv: Update load&store without cast.
* Modify intrin_rvv: Rename vfredsum to vfredosum.
* Modify intrin_rvv: Rewrite Check all/any by using vpopc.
* Modify intrin_rvv: Use reinterpret instead of c-style casting.
* Remove all macros which is not used in v_reinterpret
* Rename vpopc to vcpop according to spec.
* Fix integer overflow in cv::Luv2RGBinteger::process.
For LL=49, uu=205, vv=23, we end up with x=7373056 and y=458
for which the product y*x overflows a 32-bit int (it exceeds INT_MAX).
* imgproc(test): adjust test parameters to cover SIMD code
* dnn: LSTM optimisation
This uses the AVX-optimised fastGEMM1T for matrix multiplications where available, instead of the standard cv::gemm.
fastGEMM1T is already used by the fully-connected layer. This commit involves two minor modifications:
- Use unaligned access. I don't believe this involves any performance hit on modern CPUs (Nehalem and Bulldozer onwards) in the case where the address is actually aligned.
- Allow for weight matrices where the number of columns is not a multiple of 8.
I have not enabled AVX-512 as I don't have an AVX-512 CPU to test on.
* Fix warning about initialisation order
* Remove C++11 syntax
* Fix build when AVX(2) is not available
In this case the CV_TRY_X macros are defined to 0, rather than being undefined.
* Minor changes as requested:
- Don't check hardware support for AVX(2) when dispatch is disabled for these
- Add braces
* Fix out-of-bounds access in fully connected layer
The old tail handling in fastGEMM1T implicitly rounded vecsize up to the next multiple of 8, and the fully connected layer implements padding up to the next multiple of 8 to cope with this. The new tail handling does not round the vecsize upwards like this, but it does require that the vecsize is at least 8. To adapt to the new tail handling, the fully connected layer now rounds vecsize itself at the same time as adding the padding (which makes more sense anyway).
This also means that the fully connected layer always passes a vecsize of at least 8 to fastGEMM1T, which fixes the out-of-bounds access problems.
* Improve tail mask handling
- Use static array for generating tail masks (as requested)
- Apply tail mask to the weights as well as the input vectors to prevent spurious propagation of NaNs/Infs
* Revert whitespace change
* Improve readability of conditions for using AVX
* dnn(lstm): minor coding style changes, replaced left aligned load
[G-API] Fix issue of getting 1D Mat out of RMat::View
* Fix issue of getting 1D Mat out of RMat::View
- added test
- fixed for standalone too (removed Assert(dims.empty()))
* Fixed asView() function for standalone
* Put more detailed comment
Add capacity to Videocapture to return the extraData from FFmpeg when required
* Update rawMode to append any extra data received during the initial negotiation of an RTSP stream or during the parsing of an MPEG4 file header.
For h264[5] RTSP streams this ensures the parameter sets if available are always returned on the first call to grab()/read() and has two purposes:
1) To ensure the parameter sets are available even if they are not transmitted in band. This is common for Axis IP cameras.
2) To allow callers of VideoCapture::grab()[read()] to split the raw stream over multiple files by appending the parameter sets to the beginning of any new files.
For (1) there is no alternative, for (2) if the parameter sets were provided in band it would be possible to parse the raw bit stream and search for the parameter sets however that would be a lot of work when that information is already provided by FFMPEG.
For MPEG4 files this information is only supplied in the header and is required for decoding.
Two properties are also required to enable the raw encoded bitstream to be written to multiple files; these are:
1) an indicator as to whether the last frame was a key frame or not - each new file needs to start at a key frame to avoid storing unusable frame diffs,
2) the length in bytes of the parameter sets contained in the last frame - required to split the parameter sets from the frame without having to parse the stream. Any call to VideoCapture::get(CAP_PROP_LF_PARAM_SET_LEN) returning a number greater than zero indicates the presence of a parameter set at the beginning of the raw bitstream.
* Adjust test data to account for extraData
* Address warning.
* Change added property names and remove parameter set start code check.
* Output extra data on calls to retrieve instead of appending to the first packet.
* Reverted old test case and added new one to evaluate new functionality.
* Add missing definition.
* Remove flag from legacy api.
Add property to determine if returning extra data is supported.
Always allow extra data to be returned on calls to cap.retrieve()
Update test case.
* Update condition which indicates CAP_PROP_CODEC_EXTRADATA_INDEX is not supported in test case.
* Include compatibility for windows dll if not updated.
Enforce existing return status convention.
* Fix return error and missing test constraints.
[GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN
* Add WebNN backend for OpenCV DNN Module
Update dnn.cpp
Update dnn.cpp
Update dnn.cpp
Update dnn.cpp
Add WebNN header files into OpenCV 3rd party files
Create webnn.hpp
update cmake
Complete README and add OpenCVDetectWebNN.cmake file
add webnn.cpp
Modify webnn.cpp
Can successfully compile the codes for creating a MLContext
Update webnn.cpp
Update README.md
Update README.md
Update README.md
Update README.md
Update cmake files and
update README.md
Update OpenCVDetectWebNN.cmake and README.md
Update OpenCVDetectWebNN.cmake
Fix OpenCVDetectWebNN.cmake and update README.md
Add source webnn_cpp.cpp and library libwebnn_proc.so
Update dnn.cpp
Update dnn.cpp
Update dnn.cpp
Update dnn.cpp
update dnn.cpp
update op_webnn
update op_webnn
Update op_webnn.hpp
update op_webnn.cpp & hpp
Update op_webnn.hpp
Update op_webnn
update the skeleton
Update op_webnn.cpp
Update op_webnn
Update op_webnn.cpp
Update op_webnn.cpp
Update op_webnn.hpp
update op_webnn
update op_webnn
Solved the problems of released variables.
Fixed the bugs in op_webnn.cpp
Implement op_webnn
Implement Relu by WebNN API
Update dnn.cpp for better test
Update elementwise_layers.cpp
Implement ReLU6
Update elementwise_layers.cpp
Implement SoftMax using WebNN API
Implement Reshape by WebNN API
Implement PermuteLayer by WebNN API
Implement PoolingLayer using WebNN API
Update pooling_layer.cpp
Update pooling_layer.cpp
Update pooling_layer.cpp
Update pooling_layer.cpp
Update pooling_layer.cpp
Update pooling_layer.cpp
Implement poolingLayer by WebNN API and add more detailed logs
Update dnn.cpp
Update dnn.cpp
Remove redundant codes and add more logs for poolingLayer
Add more logs in the pooling layer implementation
Fix the indent issue and resolve the compiling issue
Fix the build problems
Fix the build issue
Fix the build issue
Update dnn.cpp
Update dnn.cpp
* Fix the build issue
* Implement BatchNorm Layer by WebNN API
* Update convolution_layer.cpp
This is a temporary file for Conv2d layer implementation
* Integrate some general functions into op_webnn.cpp&hpp
* Update const_layer.cpp
* Update convolution_layer.cpp
Still have some bugs that should be fixed.
* Update conv2d layer and fc layer
still have some problems to be fixed.
* update constLayer, conv layer, fc layer
There are still some bugs to be fixed.
* Fix the build issue
* Update concat_layer.cpp
Still have some bugs to be fixed.
* Update conv2d layer, fully connected layer and const layer
* Update convolution_layer.cpp
* Add OpenCV.js DNN module WebNN Backend (both using webnn-polyfill and electron)
* Delete bib19450.aux
* Update dnn.cpp
* Fix Error in dnn.cpp
* Resolve duplication in conditions in convolution_layer.cpp
* Fixed the issues in the comments
* Fix building issue
* Update tutorial
* Fixed comments
* Address the comments
* Update CMakeLists.txt
* Offer more accurate perf test on native
* Add better perf tests for both native and web
* Modify perf tests for better results
* Use more latest version of Electron
* Support latest WebNN Clamp op
* Add definition of HAVE_WEBNN macro
* Support group convolution
* Implement Scale_layer using WebNN
* Add Softmax option for native classification example
* Fix comments
* Fix comments
In case of very small negative h (e.g. -1e-40), with the current implementation,
you will go through the first condition and end up with h = 6.f, and will miss
the second condition.
Issue #20617 addresses the lack of warnings from the
seamlessClone() function when src is None.
This commit adds a source check using CV_Assert,
so that debugging is easier.
Signed-off-by: nickjackolson <metedurlu@gmail.com>
Add a warning message using CV_LOG_WARNING().
This way the API behaviour is preserved. Outputs are
the same, but the user gets an extra warning in case
fopen() fails to access the image file for some reason.
This would help new users and also help debugging
complex apps which use imread()
Signed-off-by: nickjackolson <metedurlu@gmail.com>
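A minimal sketch of the kind of warning described (macro usage as in opencv2/core/utils/logger.hpp; message text and helper name illustrative):

```cpp
#include <cstdio>
#include <string>
#include <opencv2/core/utils/logger.hpp>

static bool canOpenForRead(const std::string& filename)
{
    FILE* f = fopen(filename.c_str(), "rb");
    if (!f)
    {
        CV_LOG_WARNING(NULL, "imread('" << filename << "'): can't open/read file: "
                             "check file path/integrity");
        return false;
    }
    fclose(f);
    return true;
}
```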