* Python wrapper for detail
* hide pyrotationwrapper
* copy code into pyopencv_rotationwarper.hpp
* move ImageFeatures, MatchInfo and CameraParams to core/misc/
* add python test for detail
* move test_detail into test_stitching
* rename
Stitching: added functions to set warp interpolation mode (#13449)
* Added functions to set warp interpolation mode
* Use InterpolationFlags enum
* Improved getter/setter naming
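For illustration, a minimal sketch of the new accessors (the names setInterpolationFlags()/interpolationFlags() are assumed from the getter/setter renaming above):

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/stitching.hpp>

int main()
{
    // Hedged sketch: accessor names assumed to be setInterpolationFlags()/interpolationFlags().
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    stitcher->setInterpolationFlags(cv::INTER_NEAREST);              // value from the InterpolationFlags enum
    CV_Assert(stitcher->interpolationFlags() == cv::INTER_NEAREST);  // read back through the getter
    return 0;
}
```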
* added performance test for compareHist
* compareHist reworked to use wide universal intrinsics
* Disabled vectorization for CV_COMP_CORREL and CV_COMP_BHATTACHARYYA if f64 is unsupported
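The API itself is unchanged by the rework; a small self-contained example of the now-vectorized comparison methods:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // Two dummy single-channel images stand in for real inputs here.
    cv::Mat img1(64, 64, CV_8UC1, cv::Scalar(100));
    cv::Mat img2(64, 64, CV_8UC1, cv::Scalar(120));

    int histSize = 256;
    float range[] = {0.f, 256.f};
    const float* ranges[] = {range};
    cv::Mat h1, h2;
    cv::calcHist(&img1, 1, 0, cv::Mat(), h1, 1, &histSize, ranges);
    cv::calcHist(&img2, 1, 0, cv::Mat(), h2, 1, &histSize, ranges);

    // These are the methods touched by the wide-universal-intrinsics rework.
    double correl = cv::compareHist(h1, h2, cv::HISTCMP_CORREL);
    double bhatta = cv::compareHist(h1, h2, cv::HISTCMP_BHATTACHARYYA);
    return (correl <= 1.0 && bhatta >= 0.0) ? 0 : 1;
}
```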
* Enable JavaScript bindings for the photo module.
1. Enable the build flag in build_js.py.
2. Append js to the WRAP list of photo's CMakeLists.txt.
3. Add the photo module's API to the JS API whitelist (embindgen.py).
Exposing the HDR imaging part of the photo module.
[TODO]
Add tests
Fix opencv/doc/js_tutorials/
* [WIP] TODO: Add tests
* Remove TonemapDurand: algorithm patented in US, so moved to opencv_contrib
* Address ningxin's comment: expose the base class.
* Add some more simple binding tests.
Also expose process function
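The C++ HDR pipeline that the whitelist exposes looks roughly like this (a hedged sketch; the JS bindings mirror these class names, and TonemapDurand is intentionally absent because it moved to opencv_contrib):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/photo.hpp>
#include <vector>

int main()
{
    // Placeholder exposures; in practice these are LDR shots of the same scene.
    std::vector<cv::Mat> exposures;
    for (int i = 0; i < 3; ++i)
    {
        cv::Mat e(64, 64, CV_8UC3);
        cv::randu(e, cv::Scalar::all(0), cv::Scalar::all(255));
        exposures.push_back(e);
    }
    std::vector<float> times = {1.f / 30.f, 0.25f, 2.5f};     // exposure times in seconds

    cv::Mat response, hdr, ldr;
    cv::Ptr<cv::CalibrateDebevec> calibrate = cv::createCalibrateDebevec();
    calibrate->process(exposures, response, times);           // recover the camera response curve

    cv::Ptr<cv::MergeDebevec> merge = cv::createMergeDebevec();
    merge->process(exposures, hdr, times, response);          // merge into a float HDR image

    cv::Ptr<cv::Tonemap> tonemap = cv::createTonemap(2.2f);   // plain Tonemap; Durand lives in contrib now
    tonemap->process(hdr, ldr);                               // map back to a displayable range
    return 0;
}
```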
* videoio(librealsense): fix pipeline start with config
apply pipeline settings by passing the config to start().
* videoio(librealsense): add support for getting props
add support for getting some properties.
* Get/Set functions for BRISK parameters, issue #11527.
Allows setting the threshold and octaves parameters after creating a BRISK object. These parameters do not affect the initial pattern initialization and can be changed later without re-initialization.
* Fix doc parameter name.
* Brisk get/set functions tests. Check for correct value and make tests independent of default parameter values.
* Add dummy implementations for the BRISK get/set functions so as not to break the API in case someone has subclassed the features2d BRISK interface. On the other hand, this makes BRISK different from the other detectors/descriptors, where the get/set functions are pure virtual in the interface.
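A minimal usage sketch, assuming the accessors are named setThreshold/getThreshold and setOctaves/getOctaves:

```cpp
#include <opencv2/features2d.hpp>

int main()
{
    cv::Ptr<cv::BRISK> brisk = cv::BRISK::create();   // pattern is initialized with the defaults
    brisk->setThreshold(60);                          // may be changed later without re-initialization
    brisk->setOctaves(4);
    CV_Assert(brisk->getThreshold() == 60 && brisk->getOctaves() == 4);
    return 0;
}
```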
* Added performance tests for hal::norm functions
* Added sum of absolute differences intrinsic
* norm implementation updated to use wide universal intrinsics
* improve and fix v_reduce_sad on VSX
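For illustration, a hedged sketch of the SAD reduction with the fixed-width universal intrinsics; sad_u8 is a hypothetical helper, not the actual hal::norm code:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>
#include <cstdlib>

// Hypothetical helper: L1 (SAD) distance of two uint8 buffers using v_reduce_sad.
static unsigned sad_u8(const uchar* a, const uchar* b, int len)
{
    unsigned total = 0;
    int i = 0;
#if CV_SIMD128
    for (; i <= len - 16; i += 16)
    {
        cv::v_uint8x16 va = cv::v_load(a + i);
        cv::v_uint8x16 vb = cv::v_load(b + i);
        total += cv::v_reduce_sad(va, vb);   // sum of |va - vb| over the 16 lanes
    }
#endif
    for (; i < len; ++i)                     // scalar tail (and fallback without SIMD)
        total += (unsigned)std::abs(a[i] - b[i]);
    return total;
}

int main()
{
    uchar a[32], b[32];
    for (int i = 0; i < 32; ++i) { a[i] = (uchar)i; b[i] = (uchar)(i + 3); }
    return sad_u8(a, b, 32) == 32u * 3u ? 0 : 1;
}
```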
More fixes for G-API tests (#13251)
* scalar comparator and more fixes for tests
* addWeighted aligned
* white space
* more white space
* addWeighted test: accuracy check enabled
GAPI (fluid): optimization of Separable filter (#13221)
* GAPI (fluid): Separable filter: performance test
* GAPI (fluid): enable all performance tests
* GAPI: separable filters: alternative code for Sobel
* GAPI (fluid): hide unused old code for Sobel filter
* GAPI (fluid): special code for Sobel when the input is U8 and the output is S16
* GAPI (fluid): back to old code for Sobel
* GAPI (fluid): run_sepfilter3x3_impl() with CPU dispatcher
* GAPI (fluid): run_sepfilter3x3_impl(): fix compiler warnings
* GAPI (fluid): new engine for separable filters (except Sobel)
* GAPI (fluid): new performance engine for Sobel
* GAPI (fluid): Sepfilters performance: fixed compilation error
* custom OpenCL G-API kernel draft
* clean up and warnings fix
* more warnings
* white space
* new blank line at the EOF removed
* HAVE_OPENCL guard
* remove unnecessary ocl API call
* remove sum test workaround
* check if opencl activated
* fix std::str warning
* CPU fall back for symm7x7
* gpu test kernel draft
* adjust have opencl guard
* more guards
* one more attempt to adjust guards
* empty stub files and kernel source files creation in the test directory
* try to force auto generation
* one more attempt to force build
* remove symm7x7 custom from gapi module
* looks like that this version works properly on Win desktop
* clean up
* more clean up
* address some suggestions from Dmitry's review
* const kernel coefficients
* CV_Error in kernel + try to fix cpu fallback
* CV_Error_ instead of CV_Error
* everything in one gapi_gpu_test.cpp
* fix warning
* remove kernel generation, add kernel string
* avoid generated code and ocl internal namespace
* fix misprint
* c_str
* dnn/Vulkan: don't init Vulkan runtime if using other backend/target
There is no need to explicitly call an init API; the Vulkan environment is
initialized automatically the first time a VkCom object is used.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: suppress compiler warning for "-Wsign-promo"
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
- add infrastructure support for Power9/VSX3
- fix missing VSX flags on GCC 4.9 and Clang 4 (#13210, #13222)
- fix disabling VSX optimization on GCC via the ENABLE_VSX flag
- the ENABLE_VSX flag is now deprecated; use CPU_BASELINE and CPU_DISPATCH instead
- add VSX3 to arithmetic dispatchable flags
* Added caching of agents execution sequence
* Merged the linesRead() and nextWindow() methods of FluidAgent into one method
* Added caching of input lines for fluid::View
* Added caching of output lines for fluid::Buffer
* Fixed GAPI_Assert to work in standalone mode
G-API: Doxygen class reference
* G-API Doxygen documentation: covered cv::GComputation
* G-API Doxygen documentation: added sections on compile arguments
* G-API Doxygen documentation: restructuring & more text
* Added new sections (organized the API reference into them);
* Documented GCompiled, compile args, backends, etc.
* G-API Doxygen documentation: documented GKernelPackage and added group for meta
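A minimal sketch of the cv::GComputation workflow that the new reference documents (the graph is described once, then applied to real data):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>
#include <opencv2/gapi/imgproc.hpp>

int main()
{
    // Describe the graph once: one input GMat, one 3x3 box blur on it.
    cv::GMat in;
    cv::GComputation comp(in, cv::gapi::blur(in, cv::Size(3, 3)));

    // Apply it to real data; the graph is compiled on first use.
    cv::Mat input = cv::Mat::eye(64, 64, CV_8UC1), output;
    comp.apply(input, output);
    return output.empty() ? 1 : 0;
}
```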
* G-API: First steps with tutorial
* G-API Tutorial: First iteration
* G-API port of anisotropic image segmentation tutorial;
* Currently works via OpenCV only;
* Some new kernels have been required.
* G-API Tutorial: added chapters on execution code, inspection, and profiling
* G-API Tutorial: make Fluid kernel headers public
For some reason, these headers were not moved to the public
headers subtree during the initial development. Somehow it even
worked for the existing workloads.
* G-API Tutorial: Fix a couple of issues found during the work
* Introduced Phase & Sqrt kernels, OCV & Fluid versions
* Extended GKernelPackage to allow kernel removal & policies on include()
All the above stuff needs to be tested, tests will be added later
* G-API Tutorial: added chapter on running Fluid backend
* G-API Tutorial: fix a number of issues in the text
* G-API Tutorial - some final updates
- Fixed post-merge issues after Sobel kernel renaming;
- Simplified G-API code a little bit;
- Put a conclusion note in text.
* G-API Tutorial - fix build issues in test/perf targets
Public headers were refactored but test suites were not updated in time
* G-API Tutorial: Added tests & reference docs on new kernels
* Phase
* Sqrt
* G-API Tutorial: added link to the tutorial from the main module doc
* G-API Tutorial: Added tests on new GKernelPackage functionality
* G-API Tutorial: Extended InRange tests to cover 32F
* G-API Tutorial: Misc fixes
* Avoid building examples when gapi module is not there
* Added a volatile API disclaimer to G-API root documentation page
* G-API Tutorial: Fix perf tests build issue
This change came from master where Fluid kernels are still used
incorrectly.
* G-API Tutorial: Fixed channels support in Sqrt/Phase fluid kernels
Extended tests to cover this case
* G-API Tutorial: Fix text problems found on team review
cap_libv4l depends on an external library (libv4l) yet is still larger
(1966 loc vs 1822 loc).
It was initially introduced by copy-pasting cap_v4l in order to offload
various color conversions to libv4l.
However, nowadays we handle most of the needed color conversions inside
OpenCV. Our own implementation is better tested and (probably) also
better performing, as it can optionally leverage IPP/OpenCL.
Currently cap_v4l is better maintained and generally the code is in
better shape. There is however an API
difference in getting unconverted frames:
* on cap_libv4l one needs to set `CV_CAP_MODE_GRAY=1` or
`CV_CAP_MODE_YUYV=1`
* on cap_v4l one needs to set `CV_CAP_PROP_CONVERT_RGB=0`
The latter is more flexible though, as it also allows accessing undecoded
JPEG images.
fixes #4563
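A hedged sketch of the cap_v4l way of getting unconverted frames, using the new-style property name cv::CAP_PROP_CONVERT_RGB:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0, cv::CAP_V4L2);     // device index 0 is an assumption
    if (!cap.isOpened())
        return 1;
    cap.set(cv::CAP_PROP_CONVERT_RGB, 0);      // deliver frames as-is, no RGB conversion
    cv::Mat raw;
    cap >> raw;                                // buffer in the camera's native pixel format
    return raw.empty() ? 1 : 0;
}
```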
[evolution] Stitching for OpenCV 4.0
* stitching: wrap Stitcher::create for bindings
* provide method for consistent stitcher usage across languages
* samples: add python stitching sample
* port cpp stitching sample to python
* stitching: consolidate Stitcher create methods
* remove Stitcher::createDefault, which returns Stitcher, not Ptr<Stitcher> -> inconsistent API
* deprecate cv::createStitcher and cv::createStitcherScans in favor of Stitcher::create (usage sketch after this list)
* stitching: avoid anonymous enum in Stitcher
* ORIG_RESOL should be double
* add documentation
* stitching: improve documentation in Stitcher
* stitching: expose estimator in Stitcher
* remove ABI hack
* stitching: drop try_use_gpu flag
* OCL will be used automatically through T-API in OCL-enabled paths
* CUDA won't be used unless user sets CUDA-enabled classes manually
* stitching: drop FeaturesFinder
* use Feature2D instead of FeaturesFinder
* interoperability with features2d module
* detach from dependency on xfeatures2d
* features2d: fix compute and detect to work with UMat vectors
* correctly pass UMats as UMats to allow OCL paths
* support vector of UMats as output arg
* stitching: use nearest interpolation for resizing masks
* fix warnings
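As referenced above, a hedged usage sketch of the consolidated API: a single Stitcher::create() returning Ptr<Stitcher>, no try_use_gpu flag (OCL is picked up automatically via T-API where available):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/stitching.hpp>
#include <vector>

int main(int argc, char** argv)
{
    std::vector<cv::Mat> imgs;                     // input images passed on the command line
    for (int i = 1; i < argc; ++i)
        imgs.push_back(cv::imread(argv[i]));

    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);
    return status == cv::Stitcher::OK ? 0 : 1;
}
```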
* Support for Matx read/write by FileStorage
* Only reading from an empty FileStorage now produces a default Matx. Split the Matx IO test into smaller units. The test checks that an exception is thrown when reading a Mat into a Matx of a different size.
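A hedged sketch of the new Matx support, assuming it plugs into the usual FileStorage << / >> operators:

```cpp
#include <opencv2/core.hpp>

int main()
{
    cv::Matx33d M = cv::Matx33d::eye();
    {
        cv::FileStorage fs("matx.yml", cv::FileStorage::WRITE);
        fs << "M" << M;                      // write the Matx under a named node
    }
    cv::Matx33d M2;
    {
        cv::FileStorage fs("matx.yml", cv::FileStorage::READ);
        fs["M"] >> M2;                       // reading into a Matx of a different size throws
    }
    return cv::norm(M - M2) < 1e-12 ? 0 : 1;
}
```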
* moved DIS optical flow from opencv_contrib to opencv, moved TVL1 from opencv to opencv_contrib
* fixed compile warning
* TVL1 optical flow example moved to opencv_contrib
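A minimal sketch of using DIS from the main video module after the move (synthetic frames stand in for real input):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/video.hpp>

int main()
{
    cv::Mat prev(128, 128, CV_8UC1), next(128, 128, CV_8UC1), flow;
    cv::randu(prev, cv::Scalar::all(0), cv::Scalar::all(255));   // textured placeholder frames
    cv::randu(next, cv::Scalar::all(0), cv::Scalar::all(255));

    cv::Ptr<cv::DISOpticalFlow> dis = cv::DISOpticalFlow::create(cv::DISOpticalFlow::PRESET_MEDIUM);
    dis->calc(prev, next, flow);             // flow is CV_32FC2, one (dx, dy) vector per pixel
    return flow.type() == CV_32FC2 ? 0 : 1;
}
```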
* significantly reduced OpenCV binary size by disabling IPP calls in some OpenCV functions: Sobel, Scharr, medianBlur, GaussianBlur, filter2D, mean, meanStdDev, norm, sum, minMaxIdx, sort.
* re-enable IPP in norm, since it's much faster (without adding too much space overhead)
* removed C API in the following modules: photo, video, imgcodecs, videoio
* trying to fix various compile errors and warnings on Windows and Linux
* continue to fix compile errors and warnings
* continue to fix compile errors, warnings, as well as the test failures
* trying to resolve compile warnings on Android
* Update cap_dc1394_v2.cpp
fix warning from the new GCC
G-API GPU-OpenCL backend (#13008)
* gpu/ocl backend core
* accuracy tests added and adjusted + license headers
* GPU perf. tests added; almost all adjusted to pass
* all tests adjusted and passed - ready for pull request
* missing license headers
* fix warning (workaround RGB2Gray)
* fix c++ magic
* precompiled header
* white spaces
* try to fix warning and blur test
* try to fix Blur perf tests
* more alignments with the latest cpu backend
* more G-API test refactoring + one more UB issue fix + more informative reports when a tolerance is exceeded
* white space fix
* try workaround for SumTest
* GAPI_EXPORTS instead of CV_EXPORTS
V4L (V4L2): Refactoring. Added missing camera properties. Fixed getting `INF` for some properties. Single-threaded as always (#12893)
* cap_v4l:
1. Added cap_properties verbalization.
2. Elementary refactoring of property Set/Get.
3. Removed converting parameters to/from the [0,1] range.
4. Added all known conversions from V4L2_CID_* to CV_CAP_PROP_*
* cap_v4l:
1. Removed all queries for parameter ranges.
2. Refactored capture initialization.
3. Added selecting the input channel via CV_CAP_PROP_MODE. With the default value -1 the channel is not changed.
* cap_v4l:
1. Refactoring of Convert To RGB
* cap_v4l:
1. Fixed use of video buffer index.
2. Removed extra memcpy when grabbing an image.
3. Removed device closing from autosetup_capture_mode_v4l2
* cap_v4l:
1. The `goto` was eliminated
2. Fixed use of temporary buffer index for V4L2_PIX_FMT_SN9C10X
3. Fixed use of the bufferIndex
4. Removed trailing spaces and unused variables.
* cap_v4l:
1. Alias for capture->buffers[capture->bufferIndex]
2. Reduced size of data for memcpy: bytesused instead of length
3. Refactoring. Code duplication. More info for debug
* cap_v4l:
1. Added the ability to grab and retrieveFrame independently several times
* cap_v4l:
1. No need to close/reopen the device to apply new capture parameters.
2. Removed the use of the device name as a flag that the capture is closed. Added a dedicated function.
3. Refactoring. Added requestBuffers and createBuffers
* cap_v4l:
1. Added tryIoctl with `select` like was in mainloop_v4l2.
2. Fixed buffer request for device without closing the device.
3. Some static functions moved to CvCaptureCAM_V4L
4. Removed unused defines
* cap_v4l:
1. Thread-safe now
* cap_v4l:
1. Fixed thread-safe destructor
2. Fixed FPS setting
* Missed break
* Removed thread-safety
* cap_v4l:
1. Reverted conversion of parameters to/from [0,1] by default for backward compatibility.
2. Added a setting to turn off compatibility mode: set CV_CAP_PROP_MODE to 65536.
3. Most static functions moved to CvCaptureCAM_V4L.
4. Refactoring of icvRetrieveFrameCAM_V4L and use of the frame_allocated flag.
* cap_v4l:
1. Added conversion to RGB from NV12, NV21
2. Refactoring. Removed wrappers for known format conversions.
* Added `CAP_PROP_CHANNEL` to the enum VideoCaptureProperties.
CAP_V4L migrated to use VideoCaptureProperties.
* 1. Updated comments.
2. Added environment variable `OPENCV_VIDEOIO_V4L_RANGE_NORMALIZED` for setting the default backward compatibility mode.
3. Reverted getting `CAP_PROP_MODE` as a fourcc code in backward compatibility mode.
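A hedged sketch of selecting an input channel through the new CAP_PROP_CHANNEL property; the range-normalization compatibility mode mentioned above is controlled by the OPENCV_VIDEOIO_V4L_RANGE_NORMALIZED environment variable rather than in code:

```cpp
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0, cv::CAP_V4L2);     // device index 0 is an assumption
    if (!cap.isOpened())
        return 1;
    cap.set(cv::CAP_PROP_CHANNEL, 1);          // default -1 leaves the channel unchanged
    double channel = cap.get(cv::CAP_PROP_CHANNEL);
    return channel >= 0 ? 0 : 1;
}
```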
* videoio: update cap_v4l - compatibilityMode => normalizePropRange
* videoio(test): V4L2 MJPEG test
`v4l2-ctl --list-formats` should have 'MJPG' entry
* videoio: fix buffer initialization
to avoid "munmap: Invalid argument" messages
* Updated boxFilter implementations to use wide universal intrinsics
* boxFilter implementation moved to separate file
* Replaced ROUNDUP macro with roundUp() function
This is a workaround for GPU hang on heavy convolution workload (> 10 GFLOPS).
e.g. ResNet101_DUC_HDC
For long-running tasks, vkWaitForFences() returns without error, but the next call to
vkQueueSubmit() returns -4, i.e. "VK_ERROR_DEVICE_LOST", and the driver reports a GPU hang.
More investigation is needed into the root cause of the GPU hang, and the convolution shader
needs to be optimized to reduce processing time.
During the cluster-based detection of circle grids, the detected circle
pattern has to be mapped to 3D points. When doing this, the width (i.e. the
side with more circles) and the height (the side with fewer circles) of the
pattern need to be identified in image coordinates.
Until now this was done by assuming that the shorter side in image
coordinates (length in pixels) corresponds to the height in 3D.
This assumption does not hold if we look at the pattern from
a perspective where the projection of the width is shorter
than the projection of the height. This in turn led to misdetections
even though the circle pattern was clearly visible.
Instead, count how many circles have been detected along two edges of the
projected quadrangle and use the edge with more circles as the width and the
one with fewer circles as the height.
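For context, a hedged illustration of the affected call path, cluster-based detection of an asymmetric circle grid (the file name and 4x11 pattern size are assumptions):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("circle_grid_view.png", cv::IMREAD_GRAYSCALE);
    if (img.empty())
        return 1;
    std::vector<cv::Point2f> centers;
    // CALIB_CB_CLUSTERING selects the cluster-based detector that this change fixes.
    bool found = cv::findCirclesGrid(img, cv::Size(4, 11), centers,
                                     cv::CALIB_CB_ASYMMETRIC_GRID | cv::CALIB_CB_CLUSTERING);
    return found ? 0 : 1;
}
```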
* integrated the new C++ persistence; removed old persistence; most of OpenCV compiles fine! The tests have not been run yet
* fixed multiple bugs in the new C++ persistence
* fixed raw size of the parsed empty sequences
* [temporarily] excluded obsolete applications traincascade and createsamples from build
* fixed several compiler warnings and multiple test failures
* undo changes in cocoa window rendering (that was fixed in another PR)
* fixed more compile warnings and the remaining test failures (hopefully)
* trying to fix the last little warning
* Fix reading of black-and-white (thresholded) TIFF images
I recently updated my local OpenCV version to 3.4.3 and found out that
I could not read the TIFF images related to my project. After debugging I
found out that some static analysis fixes had been made that accidentally
broke reading those black-and-white TIFF images.
Commit in which reading of the mentioned TIFF images was broken:
cbb1e867e5
Basically the fix is to revert to the same functionality that was there before:
when black-and-white images are read, bpp (bitspersample) is 1.
Without the `case 1:` the TiffDecoder::readHeader() function always returns false.
* Added type and default error message
* Added stdexcept include
* Use CV_Error instead of throw std::runtime_error
* imgcodecs(test): add TIFF B/W decoding tests
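A hedged sketch of what such a test needs to verify: a 1-bit (bilevel) TIFF decodes again instead of imread() returning an empty Mat (the file name is an assumption):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::Mat bw = cv::imread("thresholded_bilevel.tif", cv::IMREAD_GRAYSCALE);
    return bw.empty() ? 1 : 0;   // non-empty means the 1-bit TIFF decoded successfully
}
```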