* GSoC 2016 - Adding toggle files to be used by tutorials.
Add a toggle option for tutorials.
* adds a button on the HTML tutorial pages to switch between blocks
* the default use case is languages: one can write a block for C++ and another one for Python without rewriting the tutorial
Add aliases to the Doxyfile.
* adding aliases to link to the previous and next tutorials.
* adding an alias to specify the toggle options in the tutorials index.
* adding an alias to embed a YouTube video directly from a link (a usage sketch of these aliases follows this entry).
Add a sample tutorial (mat_mask_operations) using the developed aliases:
* youtube alias
* previous and next tutorial alias
* buttons
* languages info for tutorial table of content
* code references with snippets (and associated sample code files)
* Removing the automatic ordering.
Adding specific toggles for cpp, java and python.
Move all the code to the footer / header and Doxyfile.
Updating documentation.
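A minimal sketch of how a tutorial page might use these aliases. The alias names below (@prev_tutorial, @next_tutorial, @youtube, @add_toggle_cpp / @add_toggle_python / @end_toggle), the tutorial labels and the video id are assumptions inferred from the descriptions above, not quoted from the commits:

```
Mat mask operations {#tutorial_mat_mask_operations}
====================

@prev_tutorial{tutorial_how_to_scan_images}
@next_tutorial{tutorial_adding_images}

@youtube{VIDEO_ID_PLACEHOLDER}

@add_toggle_cpp
This block is shown only when the C++ button is selected on the tutorial page.
@end_toggle

@add_toggle_python
This block is shown only when the Python button is selected.
@end_toggle
```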
modules/objectdetect/src/detection_based_tracker.cpp: made unique_lock<mutex> local to each function
samples/cpp/dbt_face_detection.cpp: fixed warnings on loop in Visual Studio
[GSOC] New camera model for stitching pipeline
* implement estimateAffine2D
estimates an affine transformation using the robust RANSAC method.
* uses RANSAC framework in calib3d
* includes accuracy test
* uses SVD decomposition to solve the 3-point equation
* implement estimateAffinePartial2D
estimates a limited affine transformation (rotation, uniform scale and translation); see the usage sketch below
* includes accuracy test
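A minimal usage sketch of the two new calib3d functions (the point sets and parameters below are purely illustrative, not taken from the tests):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

int main()
{
    // Illustrative point correspondences; any matched point sets would do.
    std::vector<cv::Point2f> from = { {0, 0}, {100, 0}, {0, 100}, {100, 100} };
    std::vector<cv::Point2f> to   = { {10, 5}, {110, 7}, {8, 105}, {108, 107} };

    cv::Mat inliers;

    // Full 6-DOF affine transform, estimated robustly with RANSAC.
    cv::Mat A = cv::estimateAffine2D(from, to, inliers, cv::RANSAC);

    // Limited 4-DOF transform (rotation, uniform scale, translation).
    cv::Mat Ap = cv::estimateAffinePartial2D(from, to, inliers, cv::RANSAC);

    // Both return an empty matrix on failure, otherwise a 2x3 CV_64F matrix.
    return (A.empty() || Ap.empty()) ? 1 : 0;
}
```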
* stitching: add affine matcher
initial version of matcher that estimates affine transformation
* stitching: added affine transform estimator
initial version of an estimator that simply chains transformations in homogeneous coordinates
* calib3d: rename estimateAffine3D test
renamed test Calib3d_EstimateAffineTransform to Calib3d_EstimateAffine3D. This is more descriptive and prevents confusion with estimateAffine2D tests.
* added perf test for estimateAffine functions
tests both estimateAffine2D and estimateAffinePartial2D
* calib3d: compare squared error in estimateAffine2D
* incorporates fix from #6768
* rerun affine estimation on inliers
* stitching: new API for parallel feature finding
due to ABI breakage new functionality is added to `FeaturesFinder2`, `SurfFeaturesFinder2` and `OrbFeaturesFinder2`
* stitching: add tests for parallel feature find API
* perf test (about linear speed up)
* accuracy test compares results with serial version
* stitching: use dynamic_cast to overcome ABI issues
adding parallel API to FeaturesFinder breaks ABI. This commit uses dynamic_cast and hardcodes thread-safe finders to avoid breaking ABI.
This should be replaced by proper method similar to FeaturesMatcher on next ABI break.
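A rough sketch of how the batch feature-finding call might look from user code. The exact overload (an operator() taking a whole image set) is an assumption based on the description above, and detail::FeaturesFinder was later removed from the API, so treat this as illustrative only:

```cpp
#include <opencv2/stitching.hpp>
#include <vector>

// Find features for all images at once; thread-safe finders can run in parallel.
void findAllFeatures(const std::vector<cv::Mat> &imgs,
                     std::vector<cv::detail::ImageFeatures> &features)
{
    cv::Ptr<cv::detail::FeaturesFinder> finder =
        cv::makePtr<cv::detail::OrbFeaturesFinder>();
    (*finder)(imgs, features);   // batch overload described above
    finder->collectGarbage();
}
```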
* use estimateAffinePartial2D in AffineBestOf2NearestMatcher
* add constructor to AffineBestOf2NearestMatcher
* allows choosing between the full affine transform and the partial affine transform. Other params are the same as for BestOf2NearestMatcher
* added protected field
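For illustration, constructing the matcher in both modes; in this sketch the remaining constructor parameters keep their defaults:

```cpp
#include <opencv2/stitching.hpp>

int main()
{
    // Partial (4-DOF) affine model: rotation, uniform scale and translation.
    cv::detail::AffineBestOf2NearestMatcher partial_matcher(/*full_affine=*/false);

    // Full 6-DOF affine model.
    cv::detail::AffineBestOf2NearestMatcher full_matcher(/*full_affine=*/true);
    return 0;
}
```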
* samples: stitching_detailed support affine estimator and matcher
* added new flags to choose matcher and estimator
* stitching: rework affine matcher
represent transformation in homogeneous coordinates
affine matcher: remove duplicate code
rework flow to get rid of duplicate code
affine matcher: do not center points to (0, 0)
it is not needed for the affine model and should not affect estimation in any way.
affine matcher: remove unneeded cv namespacing
* stitching: add stub bundle adjuster
* adds stub bundle adjuster that does nothing
* can be used in place of standard bundle adjusters to omit bundle adjusting step
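A small sketch of skipping the bundle adjustment step by plugging the stub into a Stitcher (Stitcher::create used here is introduced later in this series):

```cpp
#include <opencv2/stitching.hpp>

int main()
{
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);

    // Replace the default bundle adjuster with the stub that does nothing,
    // effectively omitting the bundle adjustment step.
    stitcher->setBundleAdjuster(cv::makePtr<cv::detail::NoBundleAdjuster>());
    return 0;
}
```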
* samples: stitching detailed, support no bundle adjust
* uses new NoBundleAdjuster
* added affine warper
* uses R to get whole affine transformation and propagates rotation and translation to plane warper
* add affine warper factory class
* affine warper: compensate transformation
* samples: stitching_detailed add support for affine warper
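For illustration, selecting the new warper on a Stitcher instance; this sketch assumes the factory class sits next to the other WarperCreator classes (cv::AffineWarper):

```cpp
#include <opencv2/stitching.hpp>

int main()
{
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);

    // Warp with the affine model instead of the default spherical warper.
    stitcher->setWarper(cv::makePtr<cv::AffineWarper>());
    return 0;
}
```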
* add Stitcher::create method
this method follows similar factory methods and returns a smart pointer, which allows constructing a Stitcher according to OpenCV guidelines.
* supports multiple stitcher configurations (PANORAMA and SCANS) for convenient setup
* returns cv::Ptr
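A minimal sketch of creating the two supported configurations:

```cpp
#include <opencv2/stitching.hpp>

int main()
{
    // Rotation-based camera model for classic panoramas.
    cv::Ptr<cv::Stitcher> pano_stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);

    // Affine camera model, e.g. for scans captured by translating the camera.
    cv::Ptr<cv::Stitcher> scan_stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);
    return 0;
}
```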
* stitcher: dynamically determine correct estimator
we need to use the affine estimator for the affine matcher
* preserves ABI (but adds hints for ABI 4)
* uses dynamic_cast hack to inject correct estimator
* sample stitching: add support for multiple modes
shows how to use different configurations of stitcher easily (panorama stitching and scans affine model)
* stitcher: find features in parallel
use the new FeaturesFinder API to find features in parallel. Parallelized using TBB.
* stitching: disable parallel feature finding for OCL
it does not bring much speedup to run features finder in parallel when OpenCL is enabled, because finder needs to wait for OCL device.
Also, currently ORB is not thread-safe when OCL is enabled.
* stitching: move matcher tests
move matchers tests perf_stich.cpp -> perf_matchers.cpp
* stitching: add affine stitching integration test
test basic affine stitching (SCANS mode of stitcher) with images that have only translation between them
* enable surf for stitching tests
stitching.b12 test was failing with SURF
investigated the issue: SURF is producing a good result. The transformation is only slightly different from ORB's, so the resulting pano does not exactly match ORB's result. That caused the sanity check to fail.
* added size checks similar to other tests
* sanity check will be applied only for ORB
* stitching: fix wrong estimator choice
the condition in the if statement was exactly inverted, so the wrong estimators were chosen
added logging for estimated transformation
* enable surf for matchers stitching tests
* enable SURF
* rework sanity checking. Check estimated transform instead of matches. Est. transform should be more stable and comparable between SURF and ORB.
* remove regression checking for VectorFeatures tests. It has a lot of data and the test is the same as the previous one except that it tests different vector sizes for performance, so sanity checking does not add any value here. Added basic sanity asserts instead.
* stitching tests: allow relative error for transform
* allows .01 relative error for estimated homography sanity check in stitching matchers tests
* fix VS warning
stitching tests: increase relative error
increase relative error to make it pass on all platforms (results are still good).
stitching test: allow bigger relative error
the transformation can differ in small values (with a small absolute difference, but a large relative difference). The transformation output still looks usable on all platforms. This difference affects only Mac and Windows; Linux passes fine with a small difference.
* stitching: add tests for affine matcher
uses s1, s2 images. added also new sanity data.
* stitching tests: use different data for matchers tests
this data should yield a more stable transformation (it has many more matches, especially for SURF). Sanity data regenerated.
* stitching test: rework tests for matchers
* separated rotation and translations as they are different by scale.
* use appropriate absolute error for them separately. (relative error does not work for values near zero.)
* stitching: fix affine warper compensation
calculation of rotation and translation extracted for plane warper was wrong
* stitching test: enable surf for opencl integration tests
* enable SURF with correct guard (HAVE_OPENCV_XFEATURES2D)
* add OPENCL guard and correct namespace as usual for opencl tests
* stitching: add ocl accuracy test for affine warper
test consistent results with ocl on and off
* stitching: add affine warper ocl perf test
add affine warper to existing warper perf tests. Added new sanity data.
* stitching: do not overwrite inliers in affine matcher
* estimation is run a second time on inliers only; the inliers produced in the second run would therefore not be correct for all matches
* calib3d: add Levenberg–Marquardt refining to estimateAffine2D* functions
this adds affine Levenberg-Marquardt refining to the estimateAffine2D functions, similar to what is done in findHomography.
implements Levenberg-Marquardt refining for both full affine and partial affine transformations.
* stitching: remove reestimation step in affine matcher
reestimation step is not needed. estimateAffine2D* functions are running their own reestimation on inliers using the Levenberg-Marquardt algorithm, which is better than simply rerunning RANSAC on inliers.
* implement partial affine bundle adjuster
bundle adjuster that expects an affine transform with 4 DOF. Refines parameters for all cameras together.
stitching: fix bug in BundleAdjusterAffinePartial
* use the inverse properly
* use a static buffer for the inverse to speed it up
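A short sketch of running the adjuster directly in the detail pipeline (feature finding and pairwise matching are assumed to have been done already):

```cpp
#include <opencv2/stitching.hpp>
#include <vector>

// Jointly refine camera parameters for all images using the 4-DOF affine model.
bool refineAffinePartial(const std::vector<cv::detail::ImageFeatures> &features,
                         const std::vector<cv::detail::MatchesInfo> &pairwise_matches,
                         std::vector<cv::detail::CameraParams> &cameras)
{
    cv::Ptr<cv::detail::BundleAdjusterBase> adjuster =
        cv::makePtr<cv::detail::BundleAdjusterAffinePartial>();
    return (*adjuster)(features, pairwise_matches, cameras);
}
```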
* samples: add affine bundle adjuster option to stitching_detailed
* add support for using affine bundle adjuster with 4DOF
* improve logging of initial intrinsics
* stitching: add affine bundle adjuster test
* fix build warnings
* stitching: increase limit on sanity check
prevents spurious test failures on mac. values are still pretty fine.
* stitching: set affine bundle adjuster for SCANS mode
* fix bug with AffineBestOf2NearestMatcher (we want to select affine partial mode)
* select right bundle adjuster
* stitching: increase error bound for matcher tests
* this prevents failure on Mac. The transformation is still OK.
* stitching: implement affine bundle adjuster
* implements an affine bundle adjuster that uses the full affine transform
* existing test case modified to test both the partial affine and full affine bundle adjusters
* add stitching tutorial
* show basic usage of stitching api (Stitcher class)
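A minimal sketch of what the tutorial demonstrates (file names are placeholders):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/stitching.hpp>
#include <vector>

int main()
{
    // Placeholder inputs; any set of overlapping images would do.
    std::vector<cv::Mat> imgs;
    imgs.push_back(cv::imread("img1.jpg"));
    imgs.push_back(cv::imread("img2.jpg"));
    imgs.push_back(cv::imread("img3.jpg"));

    cv::Mat pano;
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);

    if (status != cv::Stitcher::OK)
        return 1;                    // stitching failed

    cv::imwrite("result.jpg", pano);
    return 0;
}
```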
* stitching: add more integration test for affine stitching
* added new datasets to existing testcase
* removed unused include
* calib3d: move `haveCollinearPoints` to common header
* added a comment to note that this also checks for points that are too close
* calib3d: rework checkSubset for the estimateAffine* callback
* use common function to check collinearity
* this also ensures that points are not too close to each other
* calib3d: change estimateAffine* functions API
* more similar to `findHomography`, `findFundamentalMat`, `findEssentialMat` and similar
* follows standard recommended semantic INPUTS, OUTPUTS, FLAGS
* allows disabling refining
* supports the LMEDS robust method along with RANSAC (tests yet to come); see the usage sketch below
* extended docs with some tips
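For illustration, the reworked call reads like findHomography; the sketch below only shows selecting LMEDS and switching Levenberg-Marquardt refinement off (refineIters = 0), with placeholder point sets:

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

int main()
{
    std::vector<cv::Point2f> from = { {0, 0}, {100, 0}, {0, 100}, {100, 100}, {50, 50} };
    std::vector<cv::Point2f> to   = { {12, 3}, {112, 5}, {10, 103}, {110, 105}, {61, 54} };

    cv::Mat inliers;

    // LMEDS instead of RANSAC; remaining parameters keep their defaults.
    cv::Mat A_lmeds = cv::estimateAffine2D(from, to, inliers, cv::LMEDS);

    // RANSAC with refinement disabled: the raw robust estimate is returned.
    cv::Mat A_raw = cv::estimateAffine2D(from, to, inliers, cv::RANSAC,
                                         /*ransacReprojThreshold=*/3.0,
                                         /*maxIters=*/2000,
                                         /*confidence=*/0.99,
                                         /*refineIters=*/0);
    return (A_lmeds.empty() || A_raw.empty()) ? 1 : 0;
}
```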
* calib3d: rewrite estimateAffine2D test
* rewrite in googletest style
* parametrize to test both robust methods (RANSAC and LMEDS)
* get rid of boilerplate
* calib3d: rework estimateAffinePartial2D test
* rework in googletest style
* add testing for LMEDS
* calib3d: rework estimateAffine*2D perf test
* test for LMEDS speed
* test with/without Levenberg-Marquardt
* remove sanity checking (this is covered by accuracy tests)
* calib3d: improve estimateAffine*2D tests
* test transformations in loop
* improves test by testing more potential transformations
* calib3d: rewrite kernels for estimateAffine*2D functions
* use analytical solution instead of SVD
* this version is faster, especially for smaller numbers of points
* calib3d: tune up perf of estimateAffine*2D functions
* avoid copying inliers
* avoid converting input points if not necessary
* check only the `from` points for collinearity, as `to` does not affect the stability of the transform
* tutorials: add commands examples to stitching tutorials
* add some examples how to run stitcher sample code
* mention stitching_detailed.cpp
* calib3d: change computeError for estimateAffine*2D
* do error computing in floats instead of doubles
floats have the required precision and we were storing the result in a float anyway. This makes the code faster and allows auto-vectorization by smart compilers.
* documentation: mention estimateAffine*2D function
* refer to new functions on appropriate places
* prefer estimateAffine*2D over estimateRigidTransform
* stitching: add camera models documentations
* mention camera models in module documentation to give user a better overview and reduce confusion
I noticed that I missed the fact that `cimg` is used in the second `imshow()` call. Changed the scope of the second call to be within the if statement; otherwise, in cases where nothing has been detected, the second `imshow()` would attempt to use `cimg`, which would be empty, leading to an error.
In the C++ equivalent of this example, a check is made whether the vector (here in Python we have a list) actually has any lines in it, that is, whether the Hough lines function has managed to find any in the given image. This check is missing from the Python example, and if no lines are found the application breaks.
In the C++ equivalent of this example, a check is made whether the vector (here in Python we have a list) actually has any circles in it, that is, whether the Hough circles function has managed to find any in the given image. This check is missing from the Python example, and if no circles are found the application breaks. (See the C++-style sketch of the guard below.)
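For reference, a sketch of the kind of emptiness guard the C++ samples apply before drawing (variable names here are illustrative):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

void drawDetectedLines(cv::Mat &img, const cv::Mat &edges)
{
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 50, 10);

    // Only draw if the Hough transform actually found something;
    // the Python samples were missing an equivalent check.
    if (!lines.empty())
    {
        for (size_t i = 0; i < lines.size(); i++)
        {
            const cv::Vec4i &l = lines[i];
            cv::line(img, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                     cv::Scalar(0, 0, 255), 2);
        }
    }
}
```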
All of these: (performance) Prefer prefix ++/-- operators for non-primitive types.
[modules/calib3d/src/fundam.cpp:1049] -> [modules/calib3d/src/fundam.cpp:1049]: (style) Same expression on both sides of '&&'.
Commits:
67fe57a add fixed video
db0ae2c Restore 2.4 source branch for bug fix 6317.
97ac59c Fix a memory leak indirectly caused by cvDestroyWindow
eb40afa Add a workaround for FFmpeg's color conversion accessing past the end of the buffer
421fcf9 Rearrange CvVideoWriter_FFMPEG::writeFrame for better readability
912592d Remove "INSTALL_NAME_DIR lib" target property
bb1c2d7 fix bug on border at pyrUp
1. The following condition is True on each iteration because -1 % 0x100 is 255, not -1:
code = cv2.waitKey(100) % 0x100
if code != -1:
break
This was resetting the point position on each cycle, not on key press as intended.
2. The previous small bug was masking a serious bug with a matrix operation on matrices of incorrect size.
As a result, the program crashed on the 2nd iteration of the inner loop.
I have fixed it too; the matrix operation was taken from examples/cpp/kalman.cpp, where it looks like
randn( processNoise, Scalar(0), Scalar::all(sqrt(KF.processNoiseCov.at<float>(0, 0))));
which is something totally different from the previous code here.
The example behaves as it should now, i.e. the point moves along a circular trajectory as in the C++ example.
Without the fixes, after line 68 (img1_filename = parser.get<std::string>(0);) an OpenCV error occurs when running:
./stereo_matching im0.png im1.png --max-disparity=16 --blocksize=17
OpenCV Error: Bad argument (undeclared position 0 requested) in getByIndex, file /home/entodi/opencv/modules/core/src/command_line_parser.cpp, line 169
terminate called after throwing an instance of 'cv::Exception'
what(): /home/entodi/opencv/modules/core/src/command_line_parser.cpp:169: error: (-5) undeclared position 0 requested in function getByIndex
PR #2968: cce2d998578f9c
Fixed bug which caused crash of GPU version of feature matcher in stitcher
The bug caused crash of GPU version of feature matcher in stitcher when
we use ORB features.
PR #3236: 5947519
Make sure that we're not already below the required leaf false alarm rate before continuing to get negative samples.
PR #3190
fix blobdetector
PR #3562 (part): 82bd82e
TBB updated to 4.3u2. Fix for aarch64 support.
PR #3604 (part): 091c7a3
OpenGL interop sample reworked not to use cvconfig.h
PR #3792: afdf319
Add -L for CUDA libs path to pkg-config
Add all dirs from CUDA_LIBS_PATH as -L linker options to
OPENCV_LINKER_LIBS. These will end up in opencv.pc.
PR #3893: 122b9f8
Turn ocv_convert_to_lib_name into a function
PR #5490: ec5244a
fixed memory leak in findHomography tests
PR #5491: 0d5b739
delete video readers
PR #5574
PR #5202
Adding None as outImage in pos3 of cv2.drawKeypoints. Fixes bug that throws 'TypeError: Required argument 'outImage' (pos 3) not found' on left mouse button click
now it builds with the command:
`cmake.exe -Wno-dev -GNinja -DCMAKE_MAKE_PROGRAM="path\to\ninja\ninja.exe" -DCMAKE_TOOLCHAIN_FILE=../opencv3/platforms/android/android.toolchain.cmake -DANDROID_ABI="armeabi-v7a with NEON" -DANDROID_SDK_TARGET=21 -DANDROID_NATIVE_API_LEVEL=14 -DCMAKE_BUILD_WITH_INSTALL_RPATH=ON -DBUILD_ANDROID_EXAMPLES=ON -DINSTALL_ANDROID_EXAMPLES=ON -DWITH_OPENCL=YES -DANDROID_OPENCL_SDK=path\to\OpenCL ../opencv`
Previously, there was no way for the user to see the found corners; I've changed that.
A cout statement prints: "average reprojection err = "
But it isn't a "reprojection error" at all; it is the mean of the epipolar errors, which occur when the product x' * F * x is not equal to zero.
(x and x' are the same points in the right and left scene)
(the RMS that reflects the average absolute reprojection error is given by the return value of the stereoCalibrate() function)
Also, I think it's useful to initialize the camera matrices beforehand.
Thank you all for this amazing code. Apologies for my weak English.