This makes LineSegmentDetector deterministic by using stable_sort for ordering points by norm. Without this change the region growing in LSD is non-deterministic, so the returned lines change between invocations.
This is a replacement for https://github.com/opencv/opencv/pull/23370
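A minimal sketch of why stable_sort matters here (names are illustrative, not the actual LSD internals): std::stable_sort guarantees that points with equal norm keep their relative order, while std::sort makes no such promise for ties.

```cpp
#include <algorithm>
#include <vector>

// Illustrative stand-in for LSD's per-point record, not the real internals.
struct NormPoint
{
    int x, y;
    double norm; // gradient magnitude used for ordering
};

static void sortSeeds(std::vector<NormPoint>& points)
{
    // std::sort may place equal-norm points in any order, which makes the
    // region-growing seed order (and thus the detected lines) vary.
    // std::stable_sort preserves the original image-scan order of ties,
    // so the seed order is always the same for the same input.
    std::stable_sort(points.begin(), points.end(),
                     [](const NormPoint& a, const NormPoint& b)
                     {
                         return a.norm > b.norm; // strongest gradients first
                     });
}
```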
In case of huge (and probably invalid) input, make sure we do not
rely only on the while loops for truncation.
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
Fix rect_nfa (LSD)
* Fix missing log_gamma in nfa()
Comparing the nfa function with the one in the binomial_nfa reference implementation (https://github.com/rafael-grompone-von-gioi/binomial_nfa/blob/main/C99/log_binomial_nfa.c#L152), the first log_gamma call is missing; see the sketch after this list.
* Fix rect_nfa pixel index
* Replace std::rotate
* Rename tmp to v_tmp
* Replace auto and std::min_element
* Change slope equality check to int
* Fix left limit check
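For context, a hedged sketch of the log-gamma term the first fix restores, following the linked reference implementation rather than the exact OpenCV code (the function name and shape here are illustrative):

```cpp
#include <cmath>

// Sketch of one term of the binomial tail used by the NFA computation.
// The log binomial coefficient log C(n, k) needs all three lgamma calls;
// the bug was the missing first term, lgamma(n + 1.0).
static double log10_binomial_term(int n, int k, double p)
{
    const double log_binom = std::lgamma(n + 1.0)   // the call that was missing
                           - std::lgamma(k + 1.0)
                           - std::lgamma(n - k + 1.0);
    // log10 of C(n, k) * p^k * (1 - p)^(n - k), assuming 0 < p < 1
    return (log_binom + k * std::log(p) + (n - k) * std::log(1.0 - p))
           / std::log(10.0);
}
```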
Usage of imread(): magic number 0, unchecked result
* docs: rewrite 0/1 to IMREAD_GRAYSCALE/IMREAD_COLOR in imread()
* samples, apps: rewrite 0/1 to IMREAD_GRAYSCALE/IMREAD_COLOR in imread()
* tests: rewrite 0/1 to IMREAD_GRAYSCALE/IMREAD_COLOR in imread()
* doc/py_tutorials: check imread() result (a sketch of the pattern follows this list)
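A before/after sketch of the pattern these commits apply (the file name is illustrative):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main()
{
    // Before: magic number, result never checked.
    //   cv::Mat img = cv::imread("input.jpg", 0);
    // After: named flag plus an explicit emptiness check.
    cv::Mat img = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty())
    {
        std::cerr << "Could not read the image" << std::endl;
        return 1;
    }
    return 0;
}
```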
In some situations the last value was missing from the discrete theta
values. Now, the last value is chosen such that it is close to the
user-provided maximum theta, while its distance to pi always remains
at least theta_step/2. This should avoid duplicate detections.
A better approach would probably be to use max_theta as-is and adjust the
resolution (theta_step) instead, so that the discretization would
always be uniform (in a circular sense) when the full angle range is used.
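A hedged sketch of the rule described above (illustrative, not the exact OpenCV code):

```cpp
#include <algorithm>
#include <vector>

// Enumerate the discrete theta values and pick the last one so that it is
// close to the requested max_theta but keeps a margin of theta_step / 2 to
// pi. Since theta and theta + pi parameterize the same lines, a last value
// too close to pi would duplicate detections already found near theta = 0.
static std::vector<double> thetaValues(double min_theta, double max_theta,
                                       double theta_step)
{
    const double pi = 3.14159265358979323846;
    std::vector<double> thetas;
    // Last value: as close to max_theta as possible while staying at
    // least theta_step / 2 away from pi.
    const double last = std::min(max_theta, pi - theta_step / 2);
    for (double t = min_theta; t < last; t += theta_step)
        thetas.push_back(t);
    thetas.push_back(last); // the value that used to be missing
    return thetas;
}
```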
It's not clear how the ranges argument should be used in the overload of
calcHist that accepts std::vector. The main overload uses an array of
arrays there, while the std::vector overload uses a plain array. The code
interprets the vector as a flattened array and rebuilds the array of arrays
from it. This interpretation is not obvious, so documentation has been
added to explain the expected usage.
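For example, a 2-D hue/saturation histogram built with this overload passes the per-dimension ranges flattened into a single std::vector<float>:

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// 2-D hue/saturation histogram via the std::vector overload of calcHist.
// The main overload would take ranges as an array of arrays:
//   float hrange[] = {0, 180}; float srange[] = {0, 256};
//   const float* ranges[] = {hrange, srange};
// The std::vector overload expects the same data flattened into one vector.
void hsHistogram(const cv::Mat& hsv, cv::Mat& hist)
{
    std::vector<cv::Mat> images   = { hsv };
    std::vector<int>     channels = { 0, 1 };   // hue and saturation planes
    std::vector<int>     histSize = { 30, 32 }; // bins per dimension
    std::vector<float>   ranges   = { 0, 180,   // hue range ...
                                      0, 256 }; // ... then saturation range
    cv::calcHist(images, channels, cv::noArray(), hist, histSize, ranges);
}
```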
Fixed out-of-bounds read in parallel version of ippGaussianBlur()
* Fixed out-of-bounds read in parallel version of ippGaussianBlur()
* Fixed check
* Revert changes in CMakeLists.txt
Fixed threshold(THRESH_TOZERO) at imgproc(IPP)
* Fixed #16085: imgproc(IPP): wrong result from threshold(THRESH_TOZERO)
* 1. Added test cases with floats where all mantissa bits equal 1, and with min and max float as inputs
2. Used nextafterf instead of a cast from a hex constant (see the sketch after this list)
* Used float value in test instead of hex and casts
* Changed input value in test
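A sketch of the test idea (values illustrative, not the exact test code): THRESH_TOZERO keeps only values strictly greater than the threshold, so the float one ULP above it, produced with nextafterf, must survive unchanged.

```cpp
#include <opencv2/imgproc.hpp>
#include <cassert>
#include <cfloat>
#include <cmath>

// THRESH_TOZERO: dst = src > thresh ? src : 0. The value one ULP above
// the threshold is the hardest case for a float comparison, and
// nextafterf produces it portably, without hand-written hex constants.
void checkThreshToZero()
{
    const float thresh    = 127.f;
    const float justAbove = std::nextafterf(thresh, FLT_MAX);

    cv::Mat src(1, 2, CV_32F);
    src.at<float>(0, 0) = thresh;     // not greater than thresh -> 0
    src.at<float>(0, 1) = justAbove;  // one ULP above -> kept as-is

    cv::Mat dst;
    cv::threshold(src, dst, thresh, 0 /* maxval unused */, cv::THRESH_TOZERO);

    assert(dst.at<float>(0, 0) == 0.f);
    assert(dst.at<float>(0, 1) == justAbove);
}
```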
When computing:
t1 = (bayer[1] + bayer[bayer_step] + bayer[bayer_step+2] + bayer[bayer_step*2+1])*G2Y;
there is a T (unsigned short or char) multiplied by an int, which can overflow.
Granted, the result is stored in t1, which is unsigned, so the overflow
seems to disappear, but keeping everything unsigned is safer.
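A minimal sketch of the hazard and the fix (the weight value is illustrative, not necessarily OpenCV's exact constant):

```cpp
// Illustrative sketch; bayer points at unsigned short pixels.
unsigned greenSum(const unsigned short* bayer, int bayer_step)
{
    const unsigned G2Y = 9617; // fixed-point luma weight (illustrative)

    // Before: the ushort operands promote to int, and with pixel values
    // near 65535 the signed product 4 * 65535 * 9617 exceeds INT_MAX,
    // which is undefined behavior even though storing into an unsigned
    // t1 happens to hide it:
    //   t1 = (bayer[1] + bayer[bayer_step] + bayer[bayer_step+2]
    //         + bayer[bayer_step*2+1]) * G2Y;

    // After: promote to unsigned first, so the whole expression is
    // evaluated in unsigned arithmetic, where wraparound is well defined.
    unsigned t1 = ((unsigned)bayer[1] + bayer[bayer_step]
                   + bayer[bayer_step + 2]
                   + bayer[bayer_step * 2 + 1]) * G2Y;
    return t1;
}
```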