Commit Graph

122 Commits

Author SHA1 Message Date
_Burnside
6c39fbc33f
Merge pull request from Octopus136:4.x
Make \epsilon parameter accessible in VariationalRefinement 

Resolves 

I believe this is necessary to expose the \epsilon parameter.
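
For context, a minimal Python sketch of how the exposed parameter might be used once this change is in; the setEpsilon()/getEpsilon() names are an assumption here (mirroring the existing setAlpha()/setDelta()/setGamma() accessors), and the frame paths are illustrative:

```python
import cv2 as cv
import numpy as np

ref = cv.VariationalRefinement_create()
ref.setEpsilon(0.001)   # assumed name of the newly exposed accessor
print(ref.getEpsilon())

prev = cv.imread("frame0.png", cv.IMREAD_GRAYSCALE)
curr = cv.imread("frame1.png", cv.IMREAD_GRAYSCALE)
flow = np.zeros((*prev.shape, 2), dtype=np.float32)  # initial flow field to refine
flow = ref.calc(prev, curr, flow)                    # variational refinement of the flow
```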

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
2024-01-17 10:20:03 +03:00
Sean McBride
5fb3869775
Merge pull request from seanm:misc-warnings
* Fixed clang -Wnewline-eof warnings
* Fixed all trivial clang -Wextra-semi and -Wc++98-compat-extra-semi warnings
* Removed trailing semi from various macros
* Fixed various -Wunused-macros warnings
* Fixed some trivial -Wdocumentation warnings
* Fixed some -Wdocumentation-deprecated-sync warnings
* Fixed incorrect indentation
* Suppressed some clang warnings in 3rd party code
* Fixed QRCodeEncoder::Params documentation.

---------

Co-authored-by: Alexander Smorkalov <alexander.smorkalov@xperience.ai>
2023-10-06 13:33:21 +03:00
lpylpy0514
70d7e83dca
Merge pull request from lpylpy0514:4.x
VIT track (GSoC real-time object tracking model)

VIT tracker (vision transformer tracker) is a much better model for real-time object tracking. It runs about 20% faster than NanoTrack in single-threaded mode on an ARM chip, and the advantage becomes even more pronounced in multi-threaded mode. On the LaSOT dataset, VIT tracker also demonstrates better accuracy than NanoTrack. Moreover, VIT tracker provides a confidence value during tracking, which can be used to determine whether the target is currently lost.
opencv_zoo: https://github.com/opencv/opencv_zoo/pull/194
opencv_extra: [https://github.com/opencv/opencv_extra/pull/1088](https://github.com/opencv/opencv_extra/pull/1088)

# Performance comparison is as follows:
NOTE: The speeds below were measured with **onnxruntime**, because OpenCV currently has poor support for the transformer architecture.

ONNX speed test on ARM platform (Apple M2), in ms:
| thread nums | 1| 2| 3| 4|
|--------|--------|--------|--------|--------|
| nanotrack| 5.25| 4.86| 4.72| 4.49|
| vit tracker| 4.18| 2.41| 1.97| **1.46 (3X)**|

ONNX speed test on x86 platform (Intel i3-10105), in ms:
| thread nums | 1| 2| 3| 4|
|--------|--------|--------|--------|--------|
| nanotrack|3.20|2.75|2.46|2.55|
| vit tracker|3.84|2.37|2.10|2.01|

OpenCV speed test on x86 platform (Intel i3-10105), in ms:
| thread nums | 1| 2| 3| 4|
|--------|--------|--------|--------|--------|
| vit tracker|31.3|31.4|31.4|31.4|

Performance test on the LaSOT dataset (AUC is the most important metric; a higher AUC means a better tracker):

|LASOT | AUC| P| Pnorm|
|--------|--------|--------|--------|
| nanotrack| 46.8| 45.0| 43.3|
| vit tracker| 48.6| 44.8| 54.7|

[https://youtu.be/MJiPnu1ZQRI](https://youtu.be/MJiPnu1ZQRI)
In target tracking tasks, the score is an important indicator of whether the current target is lost. In the video, the VIT tracker tracks the target and displays the current score in the upper left corner. When the target is lost, the score drops significantly. NanoTrack, by contrast, returns a score of about 0.9 in any situation, so it cannot be used to determine whether the target is lost.
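
A minimal Python sketch of the intended usage, assuming the tracker is exposed as cv.TrackerVit like the other trackers (the model file name, video path, and initial bounding box are illustrative):

```python
import cv2 as cv

params = cv.TrackerVit_Params()
params.net = "vit_tracker.onnx"          # ONNX model, e.g. from opencv_zoo
tracker = cv.TrackerVit_create(params)

cap = cv.VideoCapture("video.mp4")
ok, frame = cap.read()
tracker.init(frame, (100, 100, 80, 80))  # initial bounding box (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    score = tracker.getTrackingScore()    # drops significantly when the target is lost
    if found:
        cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv.putText(frame, "score: %.2f" % score, (10, 30),
               cv.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv.imshow("vit tracker", frame)
    if cv.waitKey(1) == 27:               # Esc to quit
        break
```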

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
2023-09-19 15:36:38 +03:00
zihaomu
8e6aae0d7a Add spaces to make links clickable. 2022-12-24 15:26:42 +08:00
zihaomu
7dbb125a34 add nanotrack v2 at regression test. 2022-12-14 14:41:49 +08:00
Zihao Mu
cb8f1dca3b
Merge pull request from zihaomu:nanotrack
[test data in opencv_extra](https://github.com/opencv/opencv_extra/pull/1016)

NanoTrack is an extremely lightweight and fast object-tracking model. 
The total size is **1.1 MB**.
It runs at about **150** FPS on an M1 chip and at about **30** FPS on a Raspberry Pi 4 (Float32, CPU only).

With this model, many users can run object tracking on edge devices.

The author of NanoTrack is @HonglinChu.
The original repo is https://github.com/HonglinChu/NanoTrack.
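
A minimal Python sketch, assuming the tracker is exposed as cv.TrackerNano (the ONNX file names follow the NanoTrack repo naming and are illustrative):

```python
import cv2 as cv

params = cv.TrackerNano_Params()
params.backbone = "nanotrack_backbone_sim.onnx"   # backbone network
params.neckhead = "nanotrack_head_sim.onnx"       # neck + head network
tracker = cv.TrackerNano_create(params)

frame = cv.imread("frame0.png")
tracker.init(frame, (50, 50, 100, 100))           # (x, y, w, h)
found, box = tracker.update(cv.imread("frame1.png"))
print(found, box, tracker.getTrackingScore())
```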

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
2022-12-06 08:54:32 +03:00
Anna Prigarina
478663b08c
Merge pull request from APrigarina:tracking_api
Tracking API: added DaSiamRPN tracker

* added dasiamrpn tracker

* dasiamrpn: add test, rewrite sample

* change python samples

* fix tests

* fix params
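
A brief sketch of using the new tracker through the common Tracking API, assuming the standard Python bindings (the three ONNX file names are illustrative):

```python
import cv2 as cv

params = cv.TrackerDaSiamRPN_Params()
params.model = "dasiamrpn_model.onnx"
params.kernel_cls1 = "dasiamrpn_kernel_cls1.onnx"
params.kernel_r1 = "dasiamrpn_kernel_r1.onnx"
tracker = cv.TrackerDaSiamRPN_create(params)

frame = cv.imread("frame0.png")
tracker.init(frame, (50, 50, 100, 100))           # (x, y, w, h)
found, box = tracker.update(cv.imread("frame1.png"))
print(found, box, tracker.getTrackingScore())
```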
2021-05-31 20:23:37 +00:00
Alexander Alekhin
81f0b0e0dc cmake: fix tracking detail headers 2021-04-01 12:21:18 +00:00
LaurentBerger
d1c04af603 relative to https://forum.opencv.org/t/cv2-findtransformecc-sometimes-missing-defaults/1870 2021-03-01 17:32:56 +01:00
Alexander Alekhin
aab6362705
Merge pull request from alalek:video_tracking_api
Tracking API: move to video/tracking.hpp

* video(tracking): moved code from opencv_contrib/tracking module

- Tracker API
- MIL, GOTURN trackers
- applied clang-format

* video(tracking): cleanup unused code

* samples: add tracker.py sample

* video(tracking): avoid div by zero

* static analyzer
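
With this move the Tracker base API and the MIL/GOTURN trackers live in the main video module, so no opencv_contrib build is needed; a minimal sketch using MIL (paths and the bounding box are illustrative):

```python
import cv2 as cv

tracker = cv.TrackerMIL_create()
frame = cv.imread("frame0.png")
tracker.init(frame, (50, 50, 100, 100))   # region to track, (x, y, w, h)
found, box = tracker.update(cv.imread("frame1.png"))
print(found, box)
```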
2020-11-18 11:04:15 +00:00
Alexander Alekhin
bfcc136dc7 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2020-04-21 21:32:51 +00:00
Ganesh Kathiresan
0be2c7018b Formula Fixes for 3.4 branch
Formula fix 1

Formula fix 2

Formula fix 3

Formula fix 4

Formula fix 5

Formula fix 8
2020-04-21 19:23:23 +05:30
Alexander Alekhin
c3cf35ab63 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-02-26 17:34:42 +03:00
AKAMath
4c94804bb0 Merge pull request from amithjkamath:test
New computeECC function, and updated findTransformECC function to make Gaussian filtering optional ()

* fix for https://github.com/opencv/opencv/issues/12432 with doc and tests

* Added doc string for new parameter.

* Fixes suggested by Alalek for getting around ABI incompatibility.

* Update to docstring, to remove parameter that isn't relevant.

* More updates based on Alalek's suggestions.
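
A short sketch of the two entry points touched here, assuming the current Python bindings (image paths are illustrative):

```python
import cv2 as cv
import numpy as np

tmpl = cv.imread("template.png", cv.IMREAD_GRAYSCALE)
img = cv.imread("image.png", cv.IMREAD_GRAYSCALE)

# ECC value alone, via the new computeECC()
print(cv.computeECC(tmpl, img))

# findTransformECC() with an explicit gaussFiltSize (Gaussian pre-filtering)
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 50, 1e-6)
cc, warp = cv.findTransformECC(tmpl, img, warp, cv.MOTION_EUCLIDEAN,
                               criteria, None, 5)
```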
2019-02-22 18:36:40 +03:00
klemens
997b7b18af spelling fixes 2019-02-09 22:29:54 +01:00
Alexander Alekhin
a574788e89 move legacy C-API constants into separate files 2018-11-17 23:47:51 +00:00
Vadim Pisarevsky
96bf26611e Merge pull request from vpisarev:shuffle_optflow_algos
* moved DIS optical flow from opencv_contrib to opencv, moved TVL1 from opencv to opencv_contrib

* fixed compile warning

* TVL1 optical flow example moved to opencv_contrib
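
With DIS optical flow now in the main repository, dense flow can be computed as in this minimal sketch (frame paths are illustrative):

```python
import cv2 as cv

prev = cv.imread("frame0.png", cv.IMREAD_GRAYSCALE)
curr = cv.imread("frame1.png", cv.IMREAD_GRAYSCALE)

dis = cv.DISOpticalFlow_create(cv.DISOpticalFlow_PRESET_MEDIUM)
flow = dis.calc(prev, curr, None)
print(flow.shape)   # (H, W, 2) dense flow field
```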
2018-11-09 17:52:06 +03:00
Vadim Pisarevsky
11eafca3e2
removed C API in the following modules: photo, video, imgcodecs, videoio ()
* removed C API in the following modules: photo, video, imgcodecs, videoio

* trying to fix various compile errors and warnings on Windows and Linux

* continue to fix compile errors and warnings

* continue to fix compile errors, warnings, as well as the test failures

* trying to resolve compile warnings on Android

* Update cap_dc1394_v2.cpp

fix warning from the new GCC
2018-11-09 00:52:09 +03:00
Jiri Horner
49283ec035 Merge pull request from hrnr:video_remove_ransac
* video: remove duplicate RANSAC code

* remove RANSAC code from the video module. The module now uses RANSAC estimators from calib3d.
* deprecate estimateRigidTransform

* replace internal usage of deprecated estimateRigidTransform

* remove from wrappers
* replace usage in shape module. shape module now links to calib3d instead of video module.
* deprecate the C API version as well

* remove cvEstimateRigidTransform

* suppress deprecation warnings in estimateRigidTransform test

* the function is now deprecated
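
A short sketch of the suggested migration path from the deprecated function to the calib3d estimators (the point sets are illustrative):

```python
import cv2 as cv
import numpy as np

src = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]]).reshape(-1, 1, 2)
dst = src + np.float32([10, 5])

# old, now deprecated: cv.estimateRigidTransform(src, dst, fullAffine=...)
M_partial, inliers = cv.estimateAffinePartial2D(src, dst, method=cv.RANSAC)  # 4 DoF
M_full, _ = cv.estimateAffine2D(src, dst, method=cv.RANSAC)                  # 6 DoF
print(M_partial)
```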
2018-08-21 17:08:27 +03:00
Alexander Alekhin
7d4bb9428b Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-08-20 19:30:18 +03:00
Suleyman TURKMEN
c61bc3a0cb Update documentation and samples 2018-08-17 14:21:29 +03:00
Alexander Alekhin
3165baa1f1 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-07-02 14:58:29 +03:00
branka-plateiq
34ad9b8a42 Pass RANSAC parameters as function input ()
* Pass RANSAC parameters as function input

* Clean up unnecessary code

* Keep the original function signature

* Clean up based on PR comments

Replace array with vector.

Correct naming convention for input variables.

Add checks on input variables.

* Use vector instead of array for dynamic size

* Revert change.

* Use dynamic array

* Fix wrong syntax in array allocation

* Undo change

* Fix variable name

* Use vector and not array

* fixed compile warning on Windows
2018-06-26 16:40:28 +03:00
Alexander Alekhin
2b2fa58f97 next: drop DISABLE_OPENCV_24_COMPATIBILITY 2018-04-10 18:09:53 +03:00
Alexander Alekhin
1ca7ae9630 video: apply CV_OVERRIDE/CV_FINAL 2018-03-28 18:43:27 +03:00
luz.paz
d05714995c Misc. modules/ cont. pt2
Found via `codespell`
2018-02-13 11:28:11 -05:00
Suleyman TURKMEN
89480801b8 some improvements on tutorials 2017-07-29 20:08:19 +03:00
Tony Lian
c8783f3e23 Merge pull request from TonyLianLong:master
Remove unnecessary Non-ASCII characters from source code ()

* Remove unnecessary Non-ASCII characters from source code

Remove unnecessary Non-ASCII characters and replace them with ASCII
characters

* Remove dashes in the @param statement

Remove dashes and place single space in the @param statement to keep
coding style

* misc: more fixes for non-ASCII symbols

* misc: fix non-ASCII symbol in CMake file
2017-07-03 16:14:17 +00:00
Vadim Pisarevsky
affb60093d Merge branch 'master' of https://github.com/MicheleCancilla/opencv into parallel_ccomp 2017-05-24 16:51:18 +03:00
Michele Cancilla
9405c6d206 Improvement of array of equivalences’ upper bound + fix some wrong comments 2017-04-27 12:53:33 +02:00
Utkarsh Sinha
0330934c8b Updating documentation. 2017-04-11 06:42:20 -07:00
Vladislav Sovrasov
4e0351bafd Clarify docs for MOG2::apply 2017-01-26 12:43:41 +03:00
Jiri Horner
c17afe0fab Merge pull request from hrnr:gsoc_all
[GSOC] New camera model for stitching pipeline

* implement estimateAffine2D

estimates affine transformation using robust RANSAC method.

* uses RANSAC framework in calib3d
* includes accuracy test
* uses SVD decomposition for solving 3 point equation

* implement estimateAffinePartial2D

estimates limited affine transformation

* includes accuracy test

* stitching: add affine matcher

initial version of matcher that estimates affine transformation

* stitching: added affine transform estimator

initial version of an estimator that simply chains transformations in homogeneous coordinates

* calib3d: rename estimateAffine3D test

test Calib3d_EstimateAffineTransform rename to Calib3d_EstimateAffine3D. This is more descriptive and prevents confusion with estimateAffine2D tests.

* added perf test for estimateAffine functions

tests both estimateAffine2D and estimateAffinePartial2D

* calib3d: compare error in square in estimateAffine2D

* incorporates fix from 

* rerun affine estimation on inliers

* stitching: new API for parallel feature finding

due to ABI breakage new functionality is added to `FeaturesFinder2`, `SurfFeaturesFinder2` and `OrbFeaturesFinder2`

* stitching: add tests for parallel feature find API

* perf test (about linear speed up)
* accuracy test compares results with serial version

* stitching: use dynamic_cast to overcome ABI issues

adding parallel API to FeaturesFinder breaks ABI. This commit uses dynamic_cast and hardcodes thread-safe finders to avoid breaking ABI.

This should be replaced by proper method similar to FeaturesMatcher on next ABI break.

* use estimateAffinePartial2D in AffineBestOf2NearestMatcher

* add constructor to AffineBestOf2NearestMatcher

* allows choosing between the full affine transform and the partial affine transform. Other params are the same as for BestOf2NearestMatcher
* added protected field

* samples: stitching_detailed support affine estimator and matcher

* added new flags to choose matcher and estimator

* stitching: rework affine matcher

represent transformation in homogeneous coordinates

affine matcher: remove duplicate code
rework flow to get rid of duplicate code

affine matcher: do not center points to (0, 0)
it is not needed for affine model. it should not affect estimation in any way.

affine matcher: remove unneeded cv namespacing

* stitching: add stub bundle adjuster

* adds stub bundle adjuster that does nothing
* can be used in place of standard bundle adjusters to omit bundle adjusting step

* samples: stitching_detailed, support no bundle adjust

* uses new NoBundleAdjuster

* added affine warper

* uses R to get whole affine transformation and propagates rotation and translation to plane warper

* add affine warper factory class

* affine warper: compensate transformation

* samples: stitching_detailed add support for affine warper

* add Stitcher::create method

this method follows similar factory methods and returns a smart pointer, which allows constructing a Stitcher according to OpenCV guidelines (see the usage sketch after this list).

* supports multiple stitcher configurations (PANORAMA and SCANS) for convenient setup
* returns cv::Ptr

* stitcher: dynamically determine the correct estimator

we need to use affine estimator for affine matcher

* preserves ABI (but add hints for ABI 4)
* uses dynamic_cast hack to inject correct estimator

* sample stitching: add support for multiple modes

shows how to use different configurations of stitcher easily (panorama stitching and scans affine model)

* stitcher: find features in parallel

use new FeatureFinder API to find features in parallel. Parallelized using TBB.

* stitching: disable parallel feature finding for OCL

it does not bring much speedup to run features finder in parallel when OpenCL is enabled, because finder needs to wait for OCL device.

Also, currently ORB is not thread-safe when OCL is enabled.

* stitching: move matcher tests

move matchers tests perf_stich.cpp -> perf_matchers.cpp

* stitching: add affine stiching integration test

test basic affine stitching (SCANS mode of stitcher) with images that have only translation between them

* enable surf for stitching tests

stitching.b12 test was failing with surf

investigated the issue; surf is producing a good result. The transformation is only slightly different from ORB's, so the resulting pano does not exactly match ORB's result. That caused the sanity check to fail.

* added size checks similar to other tests
* sanity check will be applied only for ORB

* stitching: fix wrong estimator choice

the if condition was exactly inverted, so the wrong estimators were chosen

added logging for estimated transformation

* enable surf for matchers stitching tests

* enable SURF
* rework sanity checking. Check estimated transform instead of matches. Est. transform should be more stable and comparable between SURF and ORB.
* remove regression checking for VectorFeatures tests. It has a lot of data and the test is the same as the previous one except that it tests different vector sizes for performance, so sanity checking does not add any value here. Added basic sanity asserts instead.

* stitching tests: allow relative error for transform

* allows .01 relative error for estimated homography sanity check in stitching matchers tests
* fix VS warning

stitching tests: increase relative error

increase relative error to make it pass on all platforms (results are still good).

stitching test: allow bigger relative error

transformation can differ in small values (with small absolute difference, but large relative difference). transformation output still looks usable for all platforms. This difference affects only mac and windows, linux passes fine with small difference.

* stitching: add tests for affine matcher

uses s1, s2 images. added also new sanity data.

* stitching tests: use different data for matchers tests

this data should yield a more stable transformation (it has many more matches, especially for surf). Sanity data regenerated.

* stitching test: rework tests for matchers

* separated rotation and translations as they are different by scale.
* use appropriate absolute error for them separately. (relative error does not work for values near zero.)

* stitching: fix affine warper compensation

calculation of rotation and translation extracted for plane warper was wrong

* stitching test: enable surf for opencl integration tests

* enable SURF with correct guard (HAVE_OPENCV_XFEATURES2D)
* add OPENCL guard and correct namespace as usual for opencl tests

* stitching: add ocl accuracy test for affine warper

test consistent results with ocl on and off

* stitching: add affine warper ocl perf test

add affine warper to existing warper perf tests. Added new sanity data.

* stitching: do not overwrite inliers in affine matcher

* estimation is run a second time on inliers only; the inliers produced in the second run would therefore not be correct for all matches

* calib3d: add Levenberg–Marquardt refining to estimateAffine2D* functions

this adds affine Levenberg–Marquardt refining to estimateAffine2D functions similar to what is done in findHomography.

implements Levenberg–Marquardt refining for both full affine and partial affine transformations.

* stitching: remove reestimation step in affine matcher

reestimation step is not needed. estimateAffine2D* functions are running their own reestimation on inliers using the Levenberg-Marquardt algorithm, which is better than simply rerunning RANSAC on inliers.

* implement partial affine bundle adjuster

bundle adjuster that expects an affine transform with 4 DOF. Refines parameters for all cameras together.

stitching: fix bug in BundleAdjusterAffinePartial

* use the inverse properly
* use a static buffer for the inverse to speed it up

* samples: add affine bundle adjuster option to stitching_detailed

* add support for using affine bundle adjuster with 4DOF
* improve logging of initial intrinsics

* stitching: add affine bundle adjuster test

* fix build warnings

* stitching: increase limit on sanity check

prevents spurious test failures on mac. values are still pretty fine.

* stitching: set affine bundle adjuster for SCANS mode

* fix bug with AffineBestOf2NearestMatcher (we want to select affine partial mode)
* select right bundle adjuster

* stitching: increase error bound for matcher tests

* this prevents failure on mac. transformation is still ok.

* stitching: implement affine bundle adjuster

* implements affine bundle adjuster that is using full affine transform
* existing test case modified to test both the affinePartial and the full affine bundle adjuster

* add stitching tutorial

* show basic usage of stitching api (Stitcher class)

* stitching: add more integration test for affine stitching

* added new datasets to existing testcase
* removed unused include

* calib3d: move `haveCollinearPoints` to common header

* added a comment to note that this also checks for points that are too close

* calib3d: redone checkSubset for estimateAffine* callback

* use common function to check collinearity
* this also ensures that points will not be too close to each other

* calib3d: change estimateAffine* functions API

* more similar to `findHomography`, `findFundamentalMat`, `findEssentialMat` and similar
* follows standard recommended semantic INPUTS, OUTPUTS, FLAGS
* allows to disable refining
* supported LMEDS robust method (tests yet to come) along with RANSAC
* extended docs with some tips

* calib3d: rewrite estimateAffine2D test

* rewrite in googletest style
* parametrize to test both robust methods (RANSAC and LMEDS)
* get rid of boilerplate

* calib3d: rework estimateAffinePartial2D test

* rework in googletest style
* add testing for LMEDS

* calib3d: rework estimateAffine*2D perf test

* test for LMEDS speed
* test with/without Levenberg-Marquart
* remove sanity checking (this is covered by accuracy tests)

* calib3d: improve estimateAffine*2D tests

* test transformations in loop
* improves test by testing more potential transformations

* calib3d: rewrite kernels for estimateAffine*2D functions

* use analytical solution instead of SVD
* this version is faster, especially for a smaller number of points

* calib3d: tune up perf of estimateAffine*2D functions

* avoid copying inliers
* avoid converting input points if not necessary
* check only `from` point for collinearity, as `to` does not affect stability of transform

* tutorials: add commands examples to stitching tutorials

* add some examples how to run stitcher sample code
* mention stitching_detailed.cpp

* calib3d: change computeError for estimateAffine*2D

* do error computing in floats instead of doubles

this has the required precision + we were storing the result in float anyway. This makes the code faster and allows auto-vectorization by smart compilers.

* documentation: mention estimateAffine*2D function

* refer to new functions on appropriate places
* prefer estimateAffine*2D over estimateRigidTransform

* stitching: add camera models documentations

* mention camera models in module documentation to give user a better overview and reduce confusion
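
A minimal sketch of the new high-level entry point and the SCANS configuration described above, which selects the affine matcher, estimator, and bundle adjuster (image paths are illustrative):

```python
import cv2 as cv

images = [cv.imread(p) for p in ("scan1.png", "scan2.png")]

stitcher = cv.Stitcher_create(cv.Stitcher_SCANS)   # or cv.Stitcher_PANORAMA
status, pano = stitcher.stitch(images)
if status == cv.Stitcher_OK:
    cv.imwrite("result.png", pano)
```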
2016-10-22 19:10:42 +03:00
abratchik
c72fbd7a14 fix for 2016-10-18 09:38:57 +04:00
Alexander Alekhin
1c18b1d245 Merge pull request from souch55:Fixxn 2016-10-01 10:44:56 +00:00
sourin
a34fbf7bb1 Fixed identifier warnings 2016-09-30 15:16:29 +05:30
Vadim Pisarevsky
d62b0bd363 Merge pull request from alcinos:optflow_interface 2016-07-18 15:05:13 +00:00
Philipp Hasper
45bd56e28a rigidTransform: only four DoF
a combination of translation, rotation, and uniform scaling equals four degrees of freedom
2016-06-15 16:41:39 +02:00
alcinos
e22b838af8 Wrap SparseOptFlow class around PyrLK optical flow computation 2016-01-29 01:47:51 +01:00
alcinos
9b70c44f00 Adding interface for Sparse flow computation 2016-01-28 20:03:28 +01:00
alcinos
6e3b90de9b Add static creator for TVL1 optical flow class 2016-01-28 20:03:28 +01:00
alcinos
be4312ec3d Wrap DenseOptFlow class around Farneback optical flow computation 2016-01-28 20:03:27 +01:00
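
A minimal sketch of the two wrapper classes these commits introduce, assuming the standard Python bindings (frame paths are illustrative):

```python
import cv2 as cv

prev = cv.imread("frame0.png", cv.IMREAD_GRAYSCALE)
curr = cv.imread("frame1.png", cv.IMREAD_GRAYSCALE)

# SparseOptFlow wrapper around pyramidal Lucas-Kanade
pts = cv.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01, minDistance=7)
lk = cv.SparsePyrLKOpticalFlow_create()
next_pts, status, err = lk.calc(prev, curr, pts, None)

# DenseOptFlow wrapper around Farneback
fb = cv.FarnebackOpticalFlow_create()
flow = fb.calc(prev, curr, None)
```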
Alexander Alekhin
323e24e3ef change links from samples/python2 to samples/python 2015-12-18 11:00:30 +03:00
Vadim Pisarevsky
78e07d3210 Merge pull request from ellbur:findTransformECC-mask 2015-06-01 20:35:29 +00:00
Martin Ueding
1e00a93f97 Fix spelling 2015-05-13 08:16:39 +02:00
Owen Healy
86fb9f8409 Modify findTransformECC to support a mask of pixels to consider
Tests of the mask are also included.

This is useful for registering a non-square image against a non-square
template.

This also needs to relax a sanity check as per
https://github.com/Itseez/opencv/pull/3851
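
A short sketch of the added mask argument, assuming the current findTransformECC Python signature (paths and the mask region are illustrative):

```python
import cv2 as cv
import numpy as np

tmpl = cv.imread("template.png", cv.IMREAD_GRAYSCALE)
img = cv.imread("image.png", cv.IMREAD_GRAYSCALE)

mask = np.zeros(img.shape, dtype=np.uint8)
mask[20:180, 40:220] = 255   # only these pixels are considered during registration

warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 50, 1e-6)
cc, warp = cv.findTransformECC(tmpl, img, warp, cv.MOTION_AFFINE,
                               criteria, mask, 5)
```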
2015-03-19 22:16:32 -04:00
Maksim Shabunin
7335a40a61 Replaced CV_PURE_PROPERTY macros with corresponding code 2015-03-18 17:23:42 +03:00
Maksim Shabunin
79e8f0680c Updated ml module interfaces and documentation 2015-02-17 11:46:14 +03:00
Maksim Shabunin
da383e65e2 Remove deprecated methods from cv::Algorithm 2015-02-16 15:28:54 +03:00
Maksim Shabunin
c485aee464 Included c-headers for better 2.4 compatibility 2014-12-19 17:05:26 +03:00