cap_libv4l depends on an external library (libv4l) yet is still larger
(1966 loc vs 1822 loc).
It was initially introduced by copy-pasting cap_v4l in order to offload
various color conversions to libv4l.
However, nowadays we handle most of the needed color conversions inside
OpenCV. Our own implementation is better tested and (probably) also
better performing, as it can optionally leverage IPP/OpenCL.
Currently cap_v4l is better maintained and generally the code is in
better shape. There is, however, an API
difference in getting unconverted frames:
* on cap_libv4l one needs to set `CV_CAP_MODE_GRAY=1` or
`CV_CAP_MODE_YUYV=1`
* on cap_v4l one needs to set `CV_CAP_PROP_CONVERT_RGB=0`
The latter is more flexible, though, as it also allows accessing undecoded
JPEG images (see the sketch below).
fixes #4563
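A minimal sketch of requesting unconverted frames through the cap_v4l path (device index 0 and the two-argument VideoCapture constructor are assumptions; recent OpenCV versions provide both):

```cpp
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    // Open the camera explicitly through the V4L2 backend.
    cv::VideoCapture cap(0, cv::CAP_V4L2);
    if (!cap.isOpened())
        return 1;

    // Disable RGB conversion so read() returns the raw buffer (e.g. YUYV or undecoded MJPEG).
    cap.set(cv::CAP_PROP_CONVERT_RGB, 0);

    cv::Mat raw;
    if (cap.read(raw))
        std::cout << "raw frame: " << raw.cols << "x" << raw.rows
                  << " type=" << raw.type() << std::endl;
    return 0;
}
```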
[evolution] Stitching for OpenCV 4.0
* stitching: wrap Stitcher::create for bindings
* provide method for consistent stitcher usage across languages
* samples: add python stitching sample
* port cpp stitching sample to python
* stitching: consolidate Stitcher create methods
* remove Stitcher::createDefault: it returns Stitcher, not Ptr<Stitcher> -> inconsistent API
* deprecate cv::createStitcher and cv::createStitcherScans in favor of Stitcher::create (see the usage sketch after this list)
* stitching: avoid anonymous enum in Stitcher
* ORIG_RESOL should be double
* add documentation
* stitching: improve documentation in Stitcher
* stitching: expose estimator in Stitcher
* remove ABI hack
* stitching: drop try_use_gpu flag
* OCL will be used automatically through T-API in OCL-enabled paths
* CUDA won't be used unless user sets CUDA-enabled classes manually
* stitching: drop FeaturesFinder
* use Feature2D instead of FeaturesFinder
* interoperability with features2d module
* detach from dependency on xfeatures2d
* features2d: fix compute and detect to work with UMat vectors
* correctly pass UMats as UMats to allow OCL paths
* support vector of UMats as output arg
* stitching: use nearest interpolation for resizing masks
* fix warnings
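For reference, a minimal usage sketch of the consolidated factory (OpenCV 4.x API; the input images are placeholders):

```cpp
#include <opencv2/stitching.hpp>
#include <vector>

int main()
{
    std::vector<cv::Mat> imgs = { /* load overlapping input images here */ };
    cv::Mat pano;

    // Single factory replacing cv::createStitcher()/cv::createStitcherScans().
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);

    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);
    return status == cv::Stitcher::OK ? 0 : 1;
}
```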
* Support for Matx read/write by FileStorage
* Only an empty FileStorage read now produces a default Matx. Split the Matx IO test into smaller units. The test checks that an exception is thrown when reading a Mat into a Matx of a different size. A round-trip sketch follows below.
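A small sketch of the round trip this enables (the file name is illustrative):

```cpp
#include <opencv2/core.hpp>

int main()
{
    cv::Matx33d m = cv::Matx33d::eye();

    {   // write the Matx
        cv::FileStorage fs("matx.yml", cv::FileStorage::WRITE);
        fs << "M" << m;
    }
    {   // read it back into a Matx of the same size
        cv::FileStorage fs("matx.yml", cv::FileStorage::READ);
        cv::Matx33d m2;
        fs["M"] >> m2;
    }
    return 0;
}
```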
* moved DIS optical flow from opencv_contrib to opencv, moved TVL1 from opencv to opencv_contrib
* fixed compile warning
* TVL1 optical flow example moved to opencv_contrib
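A short sketch of calling DIS after the move into the main video module (frame file names are placeholders):

```cpp
#include <opencv2/video.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::DISOpticalFlow> dis =
        cv::DISOpticalFlow::create(cv::DISOpticalFlow::PRESET_MEDIUM);

    cv::Mat flow;               // CV_32FC2 per-pixel motion field
    dis->calc(prev, next, flow);
    return flow.empty() ? 1 : 0;
}
```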
* significantly reduced OpenCV binary size by disabling IPP calls in some OpenCV functions: Sobel, Scharr, medianBlur, GaussianBlur, filter2D, mean, meanStdDev, norm, sum, minMaxIdx, sort.
* re-enable IPP in norm, since it's much faster (without adding too much space overhead)
* removed C API in the following modules: photo, video, imgcodecs, videoio
* trying to fix various compile errors and warnings on Windows and Linux
* continue to fix compile errors and warnings
* continue to fix compile errors, warnings, as well as the test failures
* trying to resolve compile warnings on Android
* Update cap_dc1394_v2.cpp
fix warning from the new GCC
G-API GPU-OpenCL backend (#13008)
* gpu/ocl backend core
* accuracy tests added and adjusted + license headers
* GPU perf. tests added; almost all adjusted to pass
* all tests adjusted and passed - ready for pull request
* missing license headers
* fix warning (workaround RGB2Gray)
* fix c++ magic
* precompiled header
* white spaces
* try to fix warning and blur test
* try to fix Blur perf tests
* more alignments with the latest cpu backend
* more gapi tests refactoring + 1 more UB issue fix + more informative tolerance exceed reports
* white space fix
* try workaround for SumTest
* GAPI_EXPORTS instead of CV_EXPORTS
V4L (V4L2): Refactoring. Added missing camera properties. Fixed getting `INF` for some properties. Single-threaded as always (#12893)
* cap_v4l:
1. Added cap_properties verbalization.
2. Elementary refactoring of property set/get.
3. Removed converting parameters to/from the [0,1] range.
4. Added all known conversions from V4L2_CID_* to CV_CAP_PROP_*
* cap_v4l:
1. Removed all queries for parameter ranges.
2. Refactored capture initialization.
3. Added selecting the input channel by CV_CAP_PROP_MODE. With the default value of -1 the channel is not changed.
* cap_v4l:
1. Refactoring of Convert To RGB
* cap_v4l:
1. Fixed use of video buffer index.
2. Removed an extra memcpy when grabbing an image.
3. Removed device closing from autosetup_capture_mode_v4l2
* cap_v4l:
1. The `goto` was eliminated
2. Fixed use of temporary buffer index for V4L2_PIX_FMT_SN9C10X
3. Fixed use of the bufferIndex
4. Removed trailing spaces and unused variables.
* cap_v4l:
1. Alias for capture->buffers[capture->bufferIndex]
2. Reduced size of data for memcpy: bytesused instead of length
3. Refactoring. Reduced code duplication. More info for debugging.
* cap_v4l:
1. Added the ability to grab and retrieveFrame independently several times
* cap_v4l:
1. No need to close/reopen the device to apply new capture parameters.
2. Stopped using the device name as a flag that the capture is closed. Added a dedicated function for this.
3. Refactoring. Added requestBuffers and createBuffers
* cap_v4l:
1. Added tryIoctl with `select`, as was done in mainloop_v4l2.
2. Fixed the buffer request for the device without closing the device.
3. Some static functions moved to CvCaptureCAM_V4L
4. Removed unused defines
* cap_v4l:
1. Thread-safe now
* cap_v4l:
1. Fixed thread-safe destructor
2. Fixed FPS setting
* Missed break
* Removed thread-safety
* cap_v4l:
1. Reverted converting parameters to/from [0,1] by default for backward compatibility.
2. Added a setting to turn off compatibility mode: set CV_CAP_PROP_MODE to 65536
3. Most static functions moved to CvCaptureCAM_V4L
4. Refactoring of icvRetrieveFrameCAM_V4L and using of frame_allocated flag
* cap_v4l:
1. Added conversion to RGB from NV12, NV21
2. Refactoring. Removed wrappers for known format conversions.
* Added `CAP_PROP_CHANNEL` to the VideoCaptureProperties enum.
CAP_V4L migrated to use VideoCaptureProperties (see the sketch after this list).
* 1. Update comments.
2. Environment variable `OPENCV_VIDEOIO_V4L_RANGE_NORMALIZED` for setting default backward compatibility mode.
3. Revert getting of `CAP_PROP_MODE` as fourcc code in backward compatibility mode.
* videoio: update cap_v4l - compatibilityMode => normalizePropRange
* videoio(test): V4L2 MJPEG test
`v4l2-ctl --list-formats` should have 'MJPG' entry
* videoio: fix buffer initialization
to avoid "munmap: Invalid argument" messages
* Updated boxFilter implementations to use wide universal intrinsics
* boxFilter implementation moved to separate file
* Replaced ROUNDUP macro with roundUp() function
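Not the actual boxFilter code, just a toy illustration of the wide universal intrinsics style used by the rewrite (assumes the CV_SIMD/vx_load API of recent OpenCV):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>

// Add two float rows using whatever SIMD width the build provides
// (SSE/AVX2/NEON/VSX), with a scalar fallback for the tail and non-SIMD builds.
static void addRows(const float* a, const float* b, float* dst, int n)
{
    using namespace cv;
    int i = 0;
#if CV_SIMD
    const int step = v_float32::nlanes;
    for (; i <= n - step; i += step)
    {
        v_float32 va = vx_load(a + i);
        v_float32 vb = vx_load(b + i);
        v_store(dst + i, va + vb);
    }
#endif
    for (; i < n; ++i)
        dst[i] = a[i] + b[i];
}

int main()
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8}, b[8] = {8, 7, 6, 5, 4, 3, 2, 1}, dst[8];
    addRows(a, b, dst, 8);
    return dst[0] == 9.f ? 0 : 1;
}
```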
This is a workaround for a GPU hang on heavy convolution workloads (> 10 GFLOPS),
e.g. ResNet101_DUC_HDC.
For long-running tasks, vkWaitForFences() returns without error but the next call to
vkQueueSubmit() returns -4, i.e. "VK_ERROR_DEVICE_LOST", and the driver reports a GPU hang.
More investigation is needed into the root cause of the GPU hang, and the convolution shader
needs to be optimized to reduce processing time.
During the cluster-based detection of circle grids, the detected circle
pattern has to be mapped to 3D points. When doing this the width (i.e.
more circles) and height (i.e. fewer circles) of the pattern need to
be identified in image coordinates.
Until now this was done by assuming that the shorter side in image
coordinates (length in pixels) corresponds to the height in 3D.
This assumption does not hold if we look at the pattern from
a perspective where the projection of the width is shorter
than the projection of the height. This in turn led to misdetections
even though the circle pattern was clearly visible.
Instead, count how many circles have been detected along two edges of the
projected quadrangle and use the one with more circles as the width and the
one with fewer as the height.
* integrated the new C++ persistence; removed the old persistence; most of OpenCV compiles fine! The tests have not been run yet
* fixed multiple bugs in the new C++ persistence
* fixed raw size of the parsed empty sequences
* [temporarily] excluded obsolete applications traincascade and createsamples from build
* fixed several compiler warnings and multiple test failures
* undo changes in cocoa window rendering (that was fixed in another PR)
* fixed more compile warnings and the remaining test failures (hopefully)
* trying to fix the last little warning
* Fix reading of black-and-white (thresholded) TIFF images
I recently updated my local OpenCV version to 3.4.3 and found out that
I could not read the TIFF images related to my project. After debugging I
found out that some static analysis fixes had been made
that accidentally broke reading those black-and-white TIFF images.
Commit hash in which reading of the mentioned TIFF images was broken:
cbb1e867e5
Basically the fix is to revert to the same functionality that was there before:
when black-and-white images are read, bpp (bitspersample) is 1.
Without the `case 1:` this TiffDecoder::readHeader() function always returns false.
* Added type and default error message
* Added stdexcept include
* Use CV_Error instead of throw std::runtime_error
* imgcodecs(test): add TIFF B/W decoding tests
G-API: Introduce new `reshape()` API (#12990)
* Moved initFluidUnits, initLineConsumption, calcLatency, calcSkew to separate functions
* Added Fluid::View::allocate method (moved allocation logic from constructor)
* Changed util::zip to util::indexed, utilized collectInputMeta in GFluidExecutable constructor
* Added makeReshape method to FluidExecutable
* Removed m_outputRoi from GFluidExecutable
* Added reshape feature
* Added switch of resize mapper if agent ratio was changed
* Added more TODOs and renamed a function
* G-API reshape(): add missing `override` specifiers
Fix warnings on all platforms
Made scale parameter optional for mul kernel wrapper (#12949)
* Added missing operator*(GMat, GMat). Made the scale parameter optional for the mul kernel.
* Fixed perf test for mul(GMat, GMat) kernel
* Removed operator*(GMat, GMat) as not needed
* RGB2RGB initially rewritten
* NEON impl removed
* templated version added for ushort, float
* data copying allowed for RGB2RGB
* inplace processing fixed
* fields to local vars
* no zeroupper until it's fixed
* vx_cleanup() added back
- initialize arithmetic dispatcher
- add new universal intrinsic v_absdiffs
- add new universal intrinsic v_pack_b
- add accumulate version of universal intrinsic v_round
- fix sse/avx2:uint8 multiplication overflow
- reimplement arithmetic, logic and comparison operations into wide universal intrinsics
with full support for all types
- reimplement IPP arithmetic, logic and comparison operations in a separate file arithm_ipp.hpp
- avoid scalar multiplication if the scaling factor equals 1 and use integer multiplication
- move C arithmetic operations to precomp.hpp and delete [arithm_simd|arithm_core].hpp
- add compatibility with new opencv4 divide policy
* dnn: Add a Vulkan based backend
This commit adds a new backend "DNN_BACKEND_VKCOM" and a
new target "DNN_TARGET_VULKAN". VKCOM means vulkan based
computation library.
This backend uses Vulkan API and SPIR-V shaders to do
the inference computation for layers. The layer types
that implemented in DNN_BACKEND_VKCOM include:
Conv, Concat, ReLU, LRN, PriorBox, Softmax, MaxPooling,
AvePooling, Permute
This is just a beginning work for Vulkan in OpenCV DNN,
more layer types will be supported and performance
tuning is on the way.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/vulkan: Add FindVulkan.cmake to detect Vulkan SDK
In order to build dnn with Vulkan support, you need to install the
Vulkan SDK, set the environment variable "VULKAN_SDK" and
add "-DWITH_VULKAN=ON" to the cmake command.
You can download the Vulkan SDK from:
https://vulkan.lunarg.com/sdk/home#linux
For how to install, see
https://vulkan.lunarg.com/doc/sdk/latest/linux/getting_started.html
https://vulkan.lunarg.com/doc/sdk/latest/windows/getting_started.html
https://vulkan.lunarg.com/doc/sdk/latest/mac/getting_started.html
for Linux, Windows and macOS respectively.
To run the Vulkan backend, you also need to install a Mesa driver.
On Ubuntu, use this command: 'sudo apt-get install mesa-vulkan-drivers'
To test, use command '$BUILD_DIR/bin/opencv_test_dnn --gtest_filter=*VkCom*'
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: dynamically load Vulkan runtime
No compile-time dependency on Vulkan library.
If Vulkan runtime is unavailable, fallback to CPU path.
Use environment "OPENCL_VULKAN_RUNTIME" to specify path to your
own vulkan runtime library.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: Add a python script to compile GLSL shaders to SPIR-V shaders
The SPIR-V shaders are stored as text-based 32-bit hexadecimal
numbers and inserted into .cpp files as unsigned int32 arrays.
* dnn/Vulkan: Put Vulkan headers into 3rdparty directory and some other fixes
Vulkan header files are copied from
https://github.com/KhronosGroup/Vulkan-Docs/tree/master/include/vulkan
to 3rdparty/include
Fix the Copyright declaration issue.
Refine OpenCVDetectVulkan.cmake
* dnn/Vulkan: Add vulkan backend tests into existing ones.
Also fixed some test failures.
- Don't use a bool variable as a uniform for a shader
- Fix dispatched group number exceeding the maximum
- Bypass "group > 1" convolution. This should be supported in the future.
* dnn/Vulkan: Fix multiple initialization in one thread.
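A hedged usage sketch once OpenCV is built with -DWITH_VULKAN=ON (the Caffe model file names are placeholders):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "weights.caffemodel");

    // Route supported layers (Conv, ReLU, Pooling, ...) to the Vulkan-based backend.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_VKCOM);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_VULKAN);

    cv::Mat img(224, 224, CV_8UC3, cv::Scalar::all(127));   // dummy input image
    net.setInput(cv::dnn::blobFromImage(img));
    cv::Mat out = net.forward();
    return out.empty() ? 1 : 0;
}
```

Per the notes above, if the Vulkan runtime is unavailable, execution falls back to the CPU path.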
This is a fix to the signature of the static function
collectCalibrationData() and a clean-up for #12772, since the fallback scheme
in calibration method selection is not used anymore. As an input
parameter, iFixedPoint should be passed by value according to the OpenCV
coding style guide.
* Renamed Sobel operator GAPI kernel to match with OpenCV naming rules
* Fixed perf tests
* Small refactoring to check CI issue
* Refactored alignment for kernel wrappers in imgproc.hpp
* Update videoio.hpp
add a VideoCaptureProperties entry for crossbar input pin setting
* Update cap_dshow.cpp
Some capture cards, such as the AVerMedia CV710, use SerialDigital as the input pin and so do not work.
Here new PhysicalConnectorType enumeration values were added: PhysConn_Video_YRYBY and PhysConn_Video_SerialDigital to support them.
A new property parameter CAP_CROSSBAR_INPIN_TYPE is also provided to set the crossbar input pin type, which is used in videoInput::start(int deviceID, videoDevice *VD):
"    if (VD->useCrossbar)
    {
        DebugPrintOut("SETUP: Checking crossbar\n");
        routeCrossbar(&VD->pCaptureGraph, &VD->pVideoInputFilter, VD->connection, CAPTURE_MODE);
    }
"
And lastly, fixed one issue in the function setSizeAndSubtype by adding the code
    pVih->rcSource.top = pVih->rcSource.left = pVih->rcTarget.top = pVih->rcTarget.left = 0;
    pVih->rcSource.right = pVih->rcTarget.right = attemptWidth;
    pVih->rcSource.bottom = pVih->rcTarget.bottom = attemptHeight;
Without this code, rcSource and rcTarget keep using the default resolution, which causes hr = VD->streamConf->SetFormat(VD->pAmMediaType) to fail because no suitable MediaType can be found.
Tested with Python 3 and MFC (AVerMedia CV710)
Python 3 code:
    import cv2
    print("test cv")
    cap = cv2.VideoCapture(0)
    cap.set(5, 60)
    cap.set(3, 1920)
    cap.set(4, 1080)
    cap.set(31, 6)
    ret, img = cap.read()
    cv2.namedWindow("cap", cv2.WINDOW_NORMAL)
    cv2.resizeWindow("cap", 960, 640)
    while True:
        ret, img = cap.read()
        if ret == False:
            continue
        cv2.imshow("cap", img)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
MFC code:
    void CcvtestDlg::OnBnClickedButton1()
    {
        VideoCapture cap(0);
        cap.set(CAP_PROP_FRAME_WIDTH, 1920);
        cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
        cap.set(CAP_CROSSBAR_INPIN_TYPE, 6);
        Mat img;
        namedWindow("test", WINDOW_NORMAL);
        resizeWindow("test", 960, 640);
        while (1)
        {
            if (cap.read(img))
            {
                imshow("test", img);
                if ('q' == waitKey(1))
                    break;
            }
        }
        destroyAllWindows();
        cap.release();
    }
* Update cap_dshow.cpp
* Update videoio.hpp
move the enum value of CAP_CROSSBAR_INPIN_TYPE to the end of the list
* Update videoio.hpp
* Update cap_dshow.cpp
removed trailing whitespace
* Update test_camera.cpp
Add test for capture device using PhysConn_Video_SerialDigital as crossbar input pin
* Update test_camera.cpp
Corrected a misunderstanding about how to add a test case.
More accurate pinhole camera calibration with imperfect planar target (#12772)
43 commits:
* Add derivatives with respect to object points
Add an output parameter to calculate derivatives of image points with
respect to 3D coordinates of object points. The output jacobian matrix
is a 2Nx3N matrix where N is the number of points.
This commit introduces an incompatibility with the old function signature.
* Set zero for dpdo matrix before using
dpdo is a sparse matrix whose only non-zero values are close to the major
diagonal. Set it to zero first because only elements near the major diagonal are
computed.
* Add jacobian columns to projectPoints()
Output Jacobian columns holding the derivatives with respect to the coordinates of
the 3D object points are added. This might break callers who make assumptions about
the columns of the Jacobian matrix.
* Adapt test code to updated project functions
The test cases for projectPoints() and cvProjectPoints2() are updated to
fit new function signatures.
* Add accuracy test code for dpdo
* Add badarg test for dpdo
* Add new enum item for new calibration method
CALIB_RELEASE_OBJECT is used to select whether to release the 3D coordinates of
object points. The method was proposed in: K. H. Strobl and G. Hirzinger.
"More Accurate Pinhole Camera Calibration with Imperfect Planar Target".
In Proceedings of the IEEE International Conference on Computer Vision
(ICCV 2011), 1st IEEE Workshop on Challenges and Opportunities in Robot
Perception, Barcelona, Spain, pp. 1068-1075, November 2011.
* Add releasing object method into internal function
It's a simple extension of the standard calibration scheme. We choose to
fix the first and last object point and a user-selected fixed point.
* Add interfaces for extended calibration method
* Refine document for calibrateCamera()
When releasing object points, only the z coordinate of
objectPoints[0].back() is fixed.
* Add link to strobl2011iccv paper
* Improve documentation for calibrateCamera()
* Add implementations of wrapping calibrateCamera()
* Add checking for params of new calibration method
If the input parameters are not qualified, then fall back to the standard
calibration method.
* Add camera calibration method of releasing object
The current implementation is equal to or better than
https://github.com/xoox/calibrel
* Update doc for CALIB_RELEASE_OBJECT
CALIB_USE_QR or CALIB_USE_LU could be used for faster calibration, with
potentially less precise and less stable results in some rare cases.
* Add RELEASE_OBJECT calibration to tutorial code
To select the calibration method of releasing object points, a command
line parameter `-d=<number>` should be provided.
* Update tutorial doc for camera_calibration
If the method of releasing object points is merged into OpenCV, it will
presumably be released first in 4.1.
* Reduce epsilon for cornerSubPix()
An epsilon of 0.1 is too coarse. More precise corner positions are required
by the object-releasing calibration method.
* Refine camera calibration tutorial
The hypothesis coordinates are used to indicate which distance must be
measured between two specified object points.
* Update sample calibration code method selection
Similar to camera_calibration tutorial application, a command line
argument `-dt=<number>` is used to select the calibration method.
* Add guard to flags of cvCalibrateCamera2()
cvCalibrateCamera2() doesn't accept CALIB_RELEASE_OBJECT unless an overloaded
interface is added in the future.
* Simplify fallback when iFixedPoint is out of range
* Refactor projectPoints() to keep compatibilities
* Fix arg string "Bad rvecs header"
* Read calibration flags from test data files
Instead of being hard coded into source file, the calibration flags will
be read from test data files.
opencv_extra/testdata/cv/cameracalibration/calib?.dat must be kept in sync with
the test code.
* Add new C interface of cvCalibrateCamera4()
With this new added C interface, the extended calibration method with
CALIB_RELEASE_OBJECT can be called by C API.
* Add regression test of extended calibration method
It has been tested with new added test data in xoox:calib-release-object
branch of opencv_extra.
* Fix assertion in test_cameracalibration.cpp
The total number of refined 3D object coordinates is checked.
* Add checker for iFixedPoint in cvCalibrateCamera4
If iFixedPoint is out of the valid range, fall back to the standard method.
* Fix documentation for overloaded calibrateCamera()
* Remove calibration flag of CALIB_RELEASE_OBJECT
The method selection is based on the range of the index of the fixed point.
For negative values, the standard calibration method is chosen. Values in
the valid range select the object-releasing calibration
method.
* Use new interfaces instead of function overload
Existing interfaces are preserved and new interfaces are added. Since
most of the code base is shared, calibrateCamera() is now a
wrapper around calibrateCameraRO() (see the sketch after this list).
* Fix exported name of calibrateCameraRO()
* Update documentation for calibrateCameraRO()
The circumstances where this method is mostly helpful are described.
* Add note on the rigidity of the calibration target
* Update documentation for calibrateCameraRO()
It is clarified that iFixedPoint is used as a switch to select the
calibration method. If the input data are not qualified, exceptions will be
thrown instead of falling back.
* Clarify iFixedPoint as switch and remove fallback
iFixedPoint is now used as a switch for calibration method selection. No
fallback scheme is utilized anymore. If the input data are not
qualified, exceptions will be thrown.
* Add badarg test for object-releasing method
* Fix document format of sample list
List items of same level should be indented the same way. Otherwise they
will be formatted as nested lists by Doxygen.
* Add brief intro for objectPoints and imagePoints
* Sync tutorial to sample calibration code
* Update tutorial compatibility version to 4.0
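A compressed sketch of the new object-releasing entry point (the data-collection code is omitted; per the documentation, iFixedPoint both selects the fixed object point and switches the method on):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

int main()
{
    std::vector<std::vector<cv::Point3f>> objectPoints; // planar target model, one copy per view
    std::vector<std::vector<cv::Point2f>> imagePoints;  // detected corners, filled elsewhere
    cv::Size imageSize(640, 480);

    // Index of the user-chosen fixed point (e.g. patternSize.width - 1);
    // a negative value would select the standard, non-releasing calibration instead.
    int iFixedPoint = 8;

    cv::Mat K, dist;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::Mat newObjPoints;   // refined 3D coordinates of the target
    double rms = cv::calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
                                       K, dist, rvecs, tvecs, newObjPoints);
    return rms >= 0 ? 0 : 1;
}
```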
"as opposed to" is a phrase of opposed meaning distinguished from or in contrast with. e.g., "an approach that is theoretical as opposed to practical"
synonyms: in contrast with, as against, as contrasted with, rather than, instead of, as an alternative to
example: "we use only steam, as opposed to chemical products, to clean our house"
* G-API Documentation: first submission
This PR introduces a number of new OpenCV documentation chapters for
Graph API module.
In particular, the following topics are covered:
- Introduction & background information;
- High-level design overview;
- Kernel API;
- Pipeline example.
All changes are done in Markdown files; no headers, etc. are modified.
Doxygen references for the main API classes will be added later.
Also, a tutorial will be introduced soon (in the common Tutorials place)
* G-API Documentation - fix warnings & trailing whitespaces
* G-API Documentation: address review issues
* G-API Documentation: export code snippets to compilable files
* gapi: move documentation samples
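For context, a minimal pipeline of the kind the new documentation describes (standard G-API headers assumed):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>

int main()
{
    // Describe the graph once: blur, then convert BGR to grayscale.
    cv::GMat in;
    cv::GMat blurred = cv::gapi::blur(in, cv::Size(3, 3));
    cv::GMat out = cv::gapi::BGR2Gray(blurred);
    cv::GComputation pipeline(in, out);

    // Execute it on real data.
    cv::Mat input(480, 640, CV_8UC3, cv::Scalar::all(127)), output;
    pipeline.apply(input, output);
    return output.empty() ? 1 : 0;
}
```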
* rewrote the line segment intersection function to make the static analyzer happy
* fixed bug with improper "no intersection" detection in some corner cases
* fixed bug with improper "no intersection" detection in some corner cases
Exceptions caught by value incur needless cost in C++; most of them can
be caught by const reference, especially as nearly none are actually
used. This could allow the compiler to generate slightly more efficient code.
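Illustrative only; the pattern being standardized is:

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    try
    {
        CV_Error(cv::Error::StsError, "demo error");
    }
    catch (const cv::Exception& e)   // catch by const reference, not by value
    {
        std::cerr << e.what() << std::endl;
    }
    return 0;
}
```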
- This is to accommodate the variability in floating-point operations on new platforms/compilers
- Specifically due to the error margin found on the NVIDIA Jetson TX2
* js: update build script
- support emscripten 1.38.12 (wasm is ON by default)
- verbose build messages
* js: use builtin Math functions
* js: disable tracing code completely
Several Resize optimizations count on fetching multiple input lines at
once to do interpolation more efficiently.
At the moment, the Fluid backend supports only LPI=1 for Resize kernels.
This patch introduces scheduling support for Resize with LPI>1 and
covers these cases with new tests.
The support was initially written by Ruslan Garnov.
* Make cocoa windows draw faster
* Use a CALayer for rendering when possible. This uses the GPU to scale images, which is important because Retina Macs will want window sizes much larger (in pixels) than the image
* Fix mouse logic for cocoa windows
* Only halve resolution on retina if image is larger than display
- improve cpu dispatching calls to allow more SIMD extensions
(SSE4.1, AVX2, VSX)
- wide universal intrinsics
- replace dummy v_expand with v_expand_low
- replace v_expand + v_mul_wrap with v_mul_expand for product accumulate operations
- use FMA for accumulate operations
- add mask and more types to accumulate's performance tests
The `codec_tag` is only available when opening a file from disk. If the `AVStream` is a network stream then the `fourcc` must be obtained using `codec_id`. I have tested the following scenarios:
1) Open a `.mp4` file and verify that `codec_tag` is returned (old behavior)
2) Open an `rtsp` stream and verify that `codec_fourcc` is returned (tested with MJPEG, H264 and H265 streams)
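A sketch of reading the FOURCC back on the caller side (the stream URL is a placeholder); with this change it is derived from codec_id for network streams:

```cpp
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("rtsp://example.com/stream", cv::CAP_FFMPEG);
    if (!cap.isOpened())
        return 1;

    int fourcc = static_cast<int>(cap.get(cv::CAP_PROP_FOURCC));
    char code[5] = { char(fourcc & 0xFF), char((fourcc >> 8) & 0xFF),
                     char((fourcc >> 16) & 0xFF), char((fourcc >> 24) & 0xFF), 0 };
    std::cout << "FOURCC: " << code << std::endl;
    return 0;
}
```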
* hide use of std::mutex from /clr compilation under Visual Studio
C++11 <mutex> is not available when compiling with /clr under Visual Studio, thus OpenCV cannot be easily included.
It is fixed by adding a CEEMutex wrapper class around an opaque implementation that uses std::mutex internally
* fixed compilation outside of Visual Studio
fixed compilation outside of Visual Studio by avoiding some macros
* fixed indentation, prepare getting rid of CEEMutex/CEELockGuard
fixed indentation
After discussion, CEEMutex and CEELockGuard can be totally removed, letting the developer in a /clr context provide their own implementation
* remove CEEMutex/CEELockGuard
* Update G-API code base to 27-Sep-18
Changes mostly improve standalone build support
* G-API code base update 28-09-2018
* Windows/Documentation warnings should be fixed
* Fixed stability issues in Fluid backend
* Fixed precompiled headers issues in G-API source files
* G-API code base update 28-09-18 EOD
* Fixed several static analysis issues
* Fixed issues found when G-API is built in a standalone mode
Fixes for instrumentation of IPP and OCL (#12637)
* fixed warning about re-declaring variable when both IPP and instrumentation are enabled
* fixed segfault when no funName provided
* compilation fixed when both OCL and instrumentation are enabled
- avoid updating of input image during equalizeHist() call
- avoid for() with double variable (use 'int' instead)
- more CV_Check*() macros
- use Mat_<T>, Matx
- static for local variables
* G-API Initial code upload
* Update G-API code base to Sep-24-2018
* The majority of OpenCV buildbot problems were addressed
* Update G-API code base to 24-Sep-18 EOD
* G-API code base update 25-Sep-2018
* Linux warnings should be resolved
* Documentation build should become green
* Number of Windows warnings should be reduced
* Update G-API code base to 25-Sep-18 EOD
* ARMv7 build issue should be resolved
* ADE is bumped to latest version and should fix Clang builds for macOS/iOS
* Remaining Windows warnings should be resolved
* New Linux32 / ARMv7 warnings should be resolved
* G-API code base update 25-Sep-2018-EOD2
* Final Windows warnings should be resolved now
* G-API code base update 26-Sep-2018
* Fixed issues with precompiled headers in module and its tests
* Remove isIntel check from deep learning layers
* Remove fp16->fp32 fallbacks where it's not necessary
* Fix Kernel::run to prevent localsize > globalsize
findChessboardCornersSB: speed improvements (#12615)
* chessboard: fix: do not modify const image
* chessboard: speed up scale space using parallel_for
* chessboard: small improvements
* chessboard: speed up board growing using parallel_for
* chessboard: add flags for tuning detection
* chessboard: fix compiler warnings
* chessboard: change flag name to CALIB_CB_EXHAUSTIVE
This also fixes a typo
* chessboard: fix const ref + remove to_string
Before this fix, the code would fail if only standard deviations of
extrinsic parameters were requested, while the standard deviations matrix
should be computed if any set of standard deviations is requested. A
variable was added to represent this case.
Support asymmetric padding in pooling layer (#12519)
* Add Inception_V1 support in ONNX
* Add asymmetric padding in OpenCL and Inference engine
* Refactoring
* add new chessboard detector (a call sketch follows this list)
The chessboard detector is based on the paper
"Accurate Detection and Localization of Checkerboard Corners for
Calibration", Alexander Duda, Udo Frese,
British Machine Vision Conference, 2018.
It utilizes point symmetry of checkerboard corners in combination with a
localized Radon transform approximated by box filters to achieve high
performance even on large images. Here, tests have shown that the
ability to localize checkerboard corners is close to the theoretical
limit of 1/100 of a pixel while being considerably less sensitive
to image noise than standard methods.
* chessboard: add reference to bibtex file
* chessboard: add dependency to opencv_flann
* fix: test chesscorners. It is valid to return an empty list
In case no chessboard was detected it should be valid for the detector
to return an empty list.
For simplification, it should be allowed to return any number of corners
if they are flagged as not found.
* fix: opencv.bib remove empty lines
* fix: doc findChessboardCorners replace cvSize with cv::Size
* chessboard tests: factor out logic selecting detector
* chessboard: add unit test for findChessboardCorners2
This includes a new chessboard generator which supports subpixel
corners with high accuracy by warping an optimal chessboard using
warpPerspective.
* fix: chessboard unit test - overwrite of default parameter flag of findCirclesGrid
* chessboard: remove trailing whitespace
* chessboard: fix debug drawing
* chessboard: fix some issues during code review
* chessboard: normalize asymmetric chessboard
* chessboard: fix float double warning
* remove trailing whitespace
* chessboards: fix compiler warnings
* chessboards: fix compiler warnings
* checkerboard: some performance improvements
* chessboard: remove NULL macros for language bindings from internal headers
* chessboard: shorten license terms
* chessboard: remove unused internal method
* chessboard: set helper functions to static
* chessboard: fix normalizePoints1D using unshifted points
* chessboard: remove wrongly copied text
* chessboard: use CV_CheckTypeEQ macro
* chessboard: comment all NaN checks
* chessboard: use consistent color conversion
* chessboard: use CheckChannelEQ macro
* chessboard: assume gray color image for internal methods
* chessboard: use std::swap
* chessboard: use Mat.dataend
* chessboard: fix compiler warnings
* chessboard: replace some checks with the CV_CHECK macro
* chessboard: fix comparison function for partial sort
* chessboard: small cleanup
* chessboard: use short license header
* chessboard: rename findChessboard2 to findChessboardSB
* chessboard: fix type in unit test
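A minimal call sketch of the new detector (the pattern size and image are placeholders):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("board.png", cv::IMREAD_GRAYSCALE);
    std::vector<cv::Point2f> corners;

    // CALIB_CB_EXHAUSTIVE makes the detector try harder on difficult images.
    bool found = cv::findChessboardCornersSB(img, cv::Size(9, 6), corners,
                                             cv::CALIB_CB_NORMALIZE_IMAGE | cv::CALIB_CB_EXHAUSTIVE);
    return found ? 0 : 1;
}
```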
Feature/region layer batch mode (#12249)
* Add batch mode for Darknet networks.
Swap variables in test_darknet.
Adapt reorg layer to batch mode.
Adapt region layer.
Add OpenCL implementation.
Remove trailing whitespace.
Bugfix reorg OpenCL implementation.
Fix bug in OpenCL reorg.
Fix modulo bug.
Fix bug.
Reorg openCL.
Restore reorg layer opencl code.
OpenCl fix.
Work on openCL reorg.
Remove whitespace.
Fix openCL region layer implementation.
Fix bug.
Fix softmax region opencl bug.
Fix opencl bug.
Fix openCL bug.
Update aff_trans.cpp
When the fullAffine parameter is set to false, the estimateRigidTransform function may return an empty matrix; the _localAffineEstimate function is then called, but a bug in it produces incorrect results.
core(libva): support YV12 too
Added to CPU path only.
OpenCL code path still expects NV12 only (according to Intel OpenCL extension)
cmake: allow to specify own libva paths
via CMake:
- `-DVA_LIBRARIES=/opt/intel/mediasdk/lib64/libva.so.2\;/opt/intel/mediasdk/lib64/libva-drm.so.2`
android: NDK17 support
tested with NDK 17b (17.1.4828580)
Enable more deep learning tests using Intel's Inference Engine backend
ts: don't pass NULL for std::string() constructor
openvino: use 2018R3 defines
experimental version++
OpenCV version++
OpenCV 3.4.3
OpenCV version '-openvino'
openvino: use 2018R3 defines
Fixed windows build with InferenceEngine
dnn: fix variance setting bug for PriorBoxLayer
- The size of the second channel should be size[2] of the output tensor.
- The Scalar should be {variance[0], variance[0], variance[0], variance[0]}
for the _variance.size() == 1 case.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Fix lifetime of networks which are loaded from Model Optimizer IRs
Adds a small note describing BUILD_opencv_world (#12332)
* Added a small note describing the BUILD_opencv_world cmake option to the Installation in Windows tutorial.
* Made slight changes in BUILD_opencv_world documentation.
* Update windows_install.markdown
improved grammar
Update opengl_interop.cpp
resolves #12307
java: fix LIST_GET macro
fix typo
Added option to fail on missing testdata
Fixed object_detection.py not working in Python 3.
cleanup: IPP Async (IPP_A)
except header file with conversion routines (will be removed in OpenCV 4.0)
imgcodecs: add null pointer check
Include preprocessing nodes to object detection TensorFlow networks (#12211)
* Include preprocessing nodes to object detection TensorFlow networks
* Enable more fusion
* faster_rcnn_resnet50_coco_2018_01_28 test
countNonZero function reworked to use wide universal intrinsics instead of SSE2 intrinsics
resolve #5788
imgcodecs(webp): multiple fixes
- don't reallocate passed 'img' (test fixed - must use IMREAD_UNCHANGED / IMREAD_ANYCOLOR)
- avoid memory DDOS
- avoid reading of whole file during header processing
- avoid data access after allocated buffer during header processing (missing checks)
- use WebPFree() to free allocated buffers (libwebp >= 0.5.0)
- drop unused & undefined `.close()` method
- added checks for channels >= 5 in encoder
ml: fix adjusting K in KNearest (#12358)
dnn(perf): fix and merge Convolution tests
- OpenCL tests didn't run any OpenCL kernels
- use real configurations from existing models (the first 100 cases)
- batch size = 1
dnn(test): use dnnBackendsAndTargets() param generator
Bit-exact resize reworked to use wide intrinsics (#12038)
* Bit-exact resize reworked to use wide intrinsics
* Reworked bit-exact resize row data loading
* Added bit-exact resize row data loaders for SIMD256 and SIMD512
* Fixed type punned pointer dereferencing warning
* Reworked loading of source data for SIMD256 and SIMD512 bit-exact resize
Bit-exact GaussianBlur reworked to use wide intrinsics (#12073)
* Bit-exact GaussianBlur reworked to use wide intrinsics
* Added v_mul_hi universal intrinsic
* Removed custom SSE2 branch from bit-exact GaussianBlur
* Removed loop unrolling for gaussianBlur horizontal smoothing
doc: fix English grammar in tutorial out-of-focus-deblur filter (#12214)
* doc: fix English grammar in tutorial out-of-focus-deblur filter
* Update out_of_focus_deblur_filter.markdown
slightly modified one sentence
doc: add new tutorial motion deblur filter (#12215)
* doc: add new tutorial motion deblur filter
* Update motion_deblur_filter.markdown
a few minor changes
Replace Slice layer to Crop in Faster-RCNN networks from Caffe
js: use generated list of OpenCV headers
- replaces hand-written list
imgcodecs(webp): use safe cast to size_t on Win32
* Put Version status back to -dev.
follow the common codestyle
Exclude some target engines.
Refactor formulas.
Refactor code.
* Remove unused variable.
* Remove inference engine check for yolov2.
* Alter darknet batch tests to test with two different images.
* Add yolov3 second image GT.
* Fix bug.
* Fix bug.
* Add second test.
* Remove comment.
* Add NMS on network level.
* Add helper files to dev.
* syntax fix.
* Fix OD sample.
Fix sample dnn object detection.
Fix NMS boxes bug.
remove trailing whitespace.
Remove debug function.
Change thresholds for opencl tests.
* Adapt score diff and iou diff.
* Alter iouDiffs.
* Add debug messages.
* Adapt iouDiff.
* Fix tests
* Add Squeezenet support in ONNX
* Add AlexNet support in ONNX
* Add Googlenet support in ONNX
* Add CaffeNet and RCNN support in ONNX
* Add VGG16 and VGG16 with batch normalization support in ONNX
* Add RCNN, ZFNet, ResNet18v1 and ResNet50v1 support in ONNX
* Add ResNet101_DUC_HDC
* Add Tiny Yolov2
* Add CNN_MNIST, MobileNetv2 and LResNet100 support in ONNX
* Add ONNX models for emotion recognition
* Add DenseNet121 support in ONNX
* Add Inception v1 support in ONNX
* Refactoring
* Fix tests
* Fix tests
* Skip unstable test
* Modify Reshape operation
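For reference, a hedged sketch of loading one of the ONNX models listed above (the file name and input size are placeholders):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    // The importer now covers classification models such as SqueezeNet, AlexNet, ResNet, ...
    cv::dnn::Net net = cv::dnn::readNetFromONNX("squeezenet.onnx");

    cv::Mat img(224, 224, CV_8UC3, cv::Scalar::all(127));       // dummy image
    net.setInput(cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224)));
    cv::Mat prob = net.forward();
    return prob.empty() ? 1 : 0;
}
```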
* added basic support for CV_16F (the new datatype etc.). CV_USRTYPE1 is now equal to CV_16F, which may break some [rarely used] functionality. We'll see
* fixed a just-introduced bug in norm; reverted erroneous changes in the Torch importer (need to find a better solution)
* addressed some issues found during the PR review
* restored the patch to fix some perf test failures
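A small sketch of what the new type enables, a round trip through CV_16F (details may vary by version):

```cpp
#include <opencv2/core.hpp>

int main()
{
    cv::Mat m32(4, 4, CV_32F, cv::Scalar::all(1.5f));
    cv::Mat m16, back;

    m32.convertTo(m16, CV_16F);    // FP32 -> FP16
    m16.convertTo(back, CV_32F);   // FP16 -> FP32

    // 1.5 is exactly representable in FP16, so the round trip is lossless here.
    return cv::countNonZero(back != m32) == 0 ? 0 : 1;
}
```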
* maybe a typo fix
* remove identical branch, maybe a paste error
* add parentheses around macro parameter
* simplify if condition
* check malloc fail
* change the condition of branch removed by commit 3041502861
* rewrote Mat::convertTo() and convertScaleAbs() to wide universal intrinsics; added always-available and SIMD-optimized FP16<=>FP32 conversion
* fixed compile warnings
* fix some more compile errors
* slightly relaxed the accuracy threshold for int->float conversion (since we now do it using single-precision arithmetic, not double-precision)
* fixed compile errors on iOS, Android and in the baseline C++ version (intrin_cpp.hpp)
* trying to fix ARM-neon builds
* trying to fix ARM-neon builds
* trying to fix ARM-neon builds
* trying to fix ARM-neon builds
* trying to fix the custom AVX2 builder test failures (false alarms)
* fixed compile error with CPU_BASELINE=AVX2 on x86; raised tolerance thresholds in a couple of tests
* fixed compile error with CPU_BASELINE=AVX2 on x86; raised tolerance thresholds in a couple of tests
* fixed compile error with CPU_BASELINE=AVX2 on x86; raised tolerance thresholds in a couple of tests
* seemingly disabled false alarm warning in surf.cpp; increased tolerance thresholds in the tests for SolvePnP and in DNN/ENet
* bgr2gray 8u fixed to be in conformance with IPP code
* coefficients fixed so their sum is 32768
* java test for CascadeDetect fixed: equalizeHist added
* Remove a forward method in dnn::Layer
* Add a test
* Fix tests
* Mark multiple dnn::Layer::finalize methods as deprecated
* Replace back dnn's inputBlobs to vector of pointers
* Remove Layer::forward_fallback from CV_OCL_RUN scopes