mirror of https://github.com/opencv/opencv.git
minor typo corrections to python tutorials
parent df6728e64c
commit 6b3469268e
@@ -81,8 +81,8 @@ points.
 Now an orientation is assigned to each keypoint to achieve invariance to image rotation. A
 neighbourhood is taken around the keypoint location depending on the scale, and the gradient
 magnitude and direction is calculated in that region. An orientation histogram with 36 bins covering
-360 degrees is created. (It is weighted by gradient magnitude and gaussian-weighted circular window
-with \f$\sigma\f$ equal to 1.5 times the scale of keypoint. The highest peak in the histogram is taken
+360 degrees is created (It is weighted by gradient magnitude and gaussian-weighted circular window
+with \f$\sigma\f$ equal to 1.5 times the scale of keypoint). The highest peak in the histogram is taken
 and any peak above 80% of it is also considered to calculate the orientation. It creates keypoints
 with same location and scale, but different directions. It contribute to stability of matching.

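This hunk is from the SIFT tutorial. As a minimal sketch of the step it describes, keypoints carrying the location, scale and orientation can be computed and drawn as below; it assumes an OpenCV build where cv.SIFT_create() is available and a hypothetical input image home.jpg:

import cv2 as cv

# detect SIFT keypoints; each one stores the location, scale and the
# orientation assigned from the 36-bin histogram described above
img = cv.imread('home.jpg', cv.IMREAD_GRAYSCALE)
sift = cv.SIFT_create()
kp = sift.detect(img, None)

# the rich-keypoints flag draws each keypoint with its size and orientation
img2 = cv.drawKeypoints(img, kp, None,
                        flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv.imwrite('sift_keypoints.jpg', img2)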
@@ -99,7 +99,7 @@ illumination changes, rotation etc.
 Keypoints between two images are matched by identifying their nearest neighbours. But in some cases,
 the second closest-match may be very near to the first. It may happen due to noise or some other
 reasons. In that case, ratio of closest-distance to second-closest distance is taken. If it is
-greater than 0.8, they are rejected. It eliminaters around 90% of false matches while discards only
+greater than 0.8, they are rejected. It eliminates around 90% of false matches while discards only
 5% correct matches, as per the paper.

 So this is a summary of SIFT algorithm. For more details and understanding, reading the original
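The ratio test this hunk corrects maps directly onto knnMatch with k=2. A minimal sketch, assuming two hypothetical input images box.png and box_in_scene.png:

import cv2 as cv

img1 = cv.imread('box.png', cv.IMREAD_GRAYSCALE)
img2 = cv.imread('box_in_scene.png', cv.IMREAD_GRAYSCALE)

sift = cv.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# find the two nearest neighbours for every descriptor
bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

# keep a match only when the closest distance is below 0.8 of the
# second-closest distance, per the ratio test described above
good = [m for m, n in matches if m.distance < 0.8 * n.distance]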
@@ -20,7 +20,7 @@ extract the moving foreground from static background.
 If you have an image of background alone, like an image of the room without visitors, image of the road
 without vehicles etc, it is an easy job. Just subtract the new image from the background. You get
 the foreground objects alone. But in most of the cases, you may not have such an image, so we need
-to extract the background from whatever images we have. It become more complicated when there are
+to extract the background from whatever images we have. It becomes more complicated when there are
 shadows of the vehicles. Since shadows also move, simple subtraction will mark that also as
 foreground. It complicates things.

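The plain subtraction this hunk of the background-subtraction tutorial describes fits in a few lines; background.jpg and frame.jpg are hypothetical file names:

import cv2 as cv

bg = cv.imread('background.jpg', cv.IMREAD_GRAYSCALE)
frame = cv.imread('frame.jpg', cv.IMREAD_GRAYSCALE)

# absolute pixel-wise difference, then a threshold to get a foreground
# mask; as noted above, shadows of moving objects land in the mask too
diff = cv.absdiff(frame, bg)
_, fg_mask = cv.threshold(diff, 30, 255, cv.THRESH_BINARY)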
@@ -72,7 +72,7 @@ papers by Z.Zivkovic, "Improved adaptive Gaussian mixture model for background s
 and "Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction"
 in 2006. One important feature of this algorithm is that it selects the appropriate number of
 gaussian distribution for each pixel. (Remember, in last case, we took a K gaussian distributions
-throughout the algorithm). It provides better adaptibility to varying scenes due illumination
+throughout the algorithm). It provides better adaptability to varying scenes due illumination
 changes etc.

 As in previous case, we have to create a background subtractor object. Here, you have an option of
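For the Zivkovic algorithm, the background subtractor object mentioned in the last context line is created with cv.createBackgroundSubtractorMOG2. A minimal sketch, with vtest.avi as a hypothetical input video:

import cv2 as cv

cap = cv.VideoCapture('vtest.avi')
# detectShadows=True marks shadow pixels in gray rather than white
backSub = cv.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = backSub.apply(frame)   # per-pixel foreground mask
    cv.imshow('FG Mask', fgmask)
    if cv.waitKey(30) == 27:        # Esc to quit
        break
cap.release()
cv.destroyAllWindows()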
@@ -75,10 +75,10 @@ solution.
 ( Check similarity of inverse matrix with Harris corner detector. It denotes that corners are better
 points to be tracked.)

-So from user point of view, idea is simple, we give some points to track, we receive the optical
+So from the user point of view, the idea is simple, we give some points to track, we receive the optical
 flow vectors of those points. But again there are some problems. Until now, we were dealing with
-small motions. So it fails when there is large motion. So again we go for pyramids. When we go up in
-the pyramid, small motions are removed and large motions becomes small motions. So applying
+small motions, so it fails when there is a large motion. To deal with this we use pyramids. When we go up in
+the pyramid, small motions are removed and large motions become small motions. So by applying
 Lucas-Kanade there, we get optical flow along with the scale.

 Lucas-Kanade Optical Flow in OpenCV
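The pyramidal scheme this hunk describes is what cv.calcOpticalFlowPyrLK implements; maxLevel sets how many pyramid levels are stacked on top of the original image. A minimal sketch, with slow.flv as a hypothetical input video:

import cv2 as cv

cap = cv.VideoCapture('slow.flv')
ret, old_frame = cap.read()
old_gray = cv.cvtColor(old_frame, cv.COLOR_BGR2GRAY)

# corners are good points to track, per the Harris remark above
p0 = cv.goodFeaturesToTrack(old_gray, maxCorners=100,
                            qualityLevel=0.3, minDistance=7)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    # winSize is the window per level; maxLevel=2 adds two pyramid levels
    p1, st, err = cv.calcOpticalFlowPyrLK(old_gray, gray, p0, None,
                                          winSize=(15, 15), maxLevel=2)
    if p1 is not None:
        p0 = p1[st == 1].reshape(-1, 1, 2)  # keep successfully tracked points
    old_gray = gray
cap.release()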