Merge pull request #16865 from alalek:doc_fix_wrong_see_also
Commit 49d8c0bd52
@@ -13,7 +13,7 @@ OpenCV.js saves images as cv.Mat type. We use HTML canvas element to transfer cv
 or in reverse. The ImageData interface can represent or set the underlying pixel data of an area of a
 canvas element.
 
-@sa Please refer to canvas docs for more details.
+@note Please refer to canvas docs for more details.
 
 First, create an ImageData obj from canvas:
 @code{.js}
@@ -83,7 +83,7 @@ use 7x6 grid. (Normally a chess board has 8x8 squares and 7x7 internal corners).
 corner points and retval which will be True if pattern is obtained. These corners will be placed in
 an order (from left-to-right, top-to-bottom)
 
-@sa This function may not be able to find the required pattern in all the images. So, one good option
+@note This function may not be able to find the required pattern in all the images. So, one good option
 is to write the code such that, it starts the camera and check each frame for required pattern. Once
 the pattern is obtained, find the corners and store it in a list. Also, provide some interval before
 reading next frame so that we can adjust our chess board in different direction. Continue this
@@ -91,7 +91,7 @@ process until the required number of good patterns are obtained. Even in the exa
 are not sure how many images out of the 14 given are good. Thus, we must read all the images and take only the good
 ones.
 
-@sa Instead of chess board, we can alternatively use a circular grid. In this case, we must use the function
+@note Instead of chess board, we can alternatively use a circular grid. In this case, we must use the function
 **cv.findCirclesGrid()** to find the pattern. Fewer images are sufficient to perform camera calibration using a circular grid.
 
 Once we find the corners, we can increase their accuracy using **cv.cornerSubPix()**. We can also
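For context while reviewing, the detect-and-refine loop that the two notes above describe boils down to a sketch like the following (the 7x6 pattern size comes from the tutorial; the file glob, window sizes and termination criteria are illustrative assumptions, not part of this patch):

@code{.py}
import glob
import cv2 as cv

# Termination criteria for the cornerSubPix refinement (illustrative values).
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)

imgpoints = []  # refined 2D corners, one entry per image with a detected pattern
for fname in glob.glob('*.jpg'):  # hypothetical set of calibration images or frames
    img = cv.imread(fname)
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    # Look for the 7x6 inner-corner chessboard pattern; ret is False when not found.
    ret, corners = cv.findChessboardCorners(gray, (7, 6), None)
    if ret:
        # Refine the corner locations to sub-pixel accuracy and keep only good images.
        corners2 = cv.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners2)
@endcode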
@@ -132,7 +132,7 @@ A screen-shot of the window will look like this :
 
 ![image](images/matplotlib_screenshot.jpg)
 
-@sa Plenty of plotting options are available in Matplotlib. Please refer to Matplotlib docs for more
+@note Plenty of plotting options are available in Matplotlib. Please refer to Matplotlib docs for more
 details. Some, we will see on the way.
 
 __warning__
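As a quick illustration of the Matplotlib usage this note points to, a minimal sketch (the file name is a placeholder; the BGR-to-RGB conversion anticipates the color-order caveat usually raised in this context):

@code{.py}
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('example.jpg')              # hypothetical input image
# OpenCV loads color images as BGR; Matplotlib expects RGB, hence the conversion.
plt.imshow(cv.cvtColor(img, cv.COLOR_BGR2RGB))
plt.xticks([]), plt.yticks([])              # hide tick values on both axes
plt.show()
@endcode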
@@ -113,7 +113,7 @@ I got following results:
 
 See, even image rotation doesn't affect much on this comparison.
 
-@sa [Hu-Moments](http://en.wikipedia.org/wiki/Image_moment#Rotation_invariant_moments) are seven
+@note [Hu-Moments](http://en.wikipedia.org/wiki/Image_moment#Rotation_invariant_moments) are seven
 moments invariant to translation, rotation and scale. Seventh one is skew-invariant. Those values
 can be found using **cv.HuMoments()** function.
 
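A short sketch of how the Hu moments mentioned in the note can be computed (file name and threshold value are illustrative assumptions):

@code{.py}
import cv2 as cv

# Binarize a shape image, then derive the seven Hu moments from its raw moments.
img = cv.imread('shape.png', cv.IMREAD_GRAYSCALE)
_, thresh = cv.threshold(img, 127, 255, cv.THRESH_BINARY)
hu = cv.HuMoments(cv.moments(thresh))   # 7x1 array, invariant to translation/rotation/scale
print(hu.ravel())
@endcode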
@@ -94,7 +94,7 @@ hist is same as we calculated before. But bins will have 257 elements, because N
 as 0-0.99, 1-1.99, 2-2.99 etc. So final range would be 255-255.99. To represent that, they also add
 256 at end of bins. But we don't need that 256. Upto 255 is sufficient.
 
-@sa Numpy has another function, **np.bincount()** which is much faster than (around 10X)
+@note Numpy has another function, **np.bincount()** which is much faster than (around 10X)
 np.histogram(). So for one-dimensional histograms, you can better try that. Don't forget to set
 minlength = 256 in np.bincount. For example, hist = np.bincount(img.ravel(),minlength=256)
 
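The note's np.bincount suggestion, spelled out as a runnable sketch (the input file is illustrative; for an 8-bit grayscale image the two results are identical):

@code{.py}
import cv2 as cv
import numpy as np

img = cv.imread('home.jpg', cv.IMREAD_GRAYSCALE)    # hypothetical 8-bit grayscale input

# np.histogram computes counts over 256 bins covering [0, 256).
hist_np, bins = np.histogram(img.ravel(), 256, [0, 256])

# np.bincount yields the same 256 counts and is typically much faster here.
hist_bc = np.bincount(img.ravel(), minlength=256)

assert np.array_equal(hist_np, hist_bc)
@endcode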
@@ -51,7 +51,7 @@ Let's introduce the notation used to define formally a hyperplane:
 
 where \f$\beta\f$ is known as the *weight vector* and \f$\beta_{0}\f$ as the *bias*.
 
-@sa A more in depth description of this and hyperplanes you can find in the section 4.5 (*Separating
+@note A more in depth description of this and hyperplanes you can find in the section 4.5 (*Separating
 Hyperplanes*) of the book: *Elements of Statistical Learning* by T. Hastie, R. Tibshirani and J. H.
 Friedman (@cite HTF01).
 
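For readers without the surrounding tutorial text: the hyperplane definition that the "where" clause above refers to (elided from this hunk's context) is presumably the usual linear form

\f[f(x) = \beta_{0} + \beta^{T} x,\f]

with the separating hyperplane given by \f$f(x) = 0\f$.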
@@ -164,7 +164,7 @@ Describing the methods goes well beyond the purpose of this tutorial. For that I
 the article introducing it. Nevertheless, you can get a good image of it by looking at the OpenCV
 implementation below.
 
-@sa
+@note
 SSIM is described more in-depth in the: "Z. Wang, A. C. Bovik, H. R. Sheikh and E. P.
 Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE
 Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004." article.
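For quick reference while reviewing, the per-window structural similarity index defined in the cited Wang et al. 2004 paper is

\f[\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)},\f]

where \f$\mu\f$, \f$\sigma^2\f$ and \f$\sigma_{xy}\f$ are local means, variances and covariance of the two image patches, and \f$c_1, c_2\f$ are small stabilizing constants.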
@@ -256,8 +256,8 @@ public:
 
 /** @brief Implementation of the Zach, Pock and Bischof Dual TV-L1 Optical Flow method.
  *
- * @sa C. Zach, T. Pock and H. Bischof, "A Duality Based Approach for Realtime TV-L1 Optical Flow".
- * @sa Javier Sanchez, Enric Meinhardt-Llopis and Gabriele Facciolo. "TV-L1 Optical Flow Estimation".
+ * @note C. Zach, T. Pock and H. Bischof, "A Duality Based Approach for Realtime TV-L1 Optical Flow".
+ * @note Javier Sanchez, Enric Meinhardt-Llopis and Gabriele Facciolo. "TV-L1 Optical Flow Estimation".
 */
 class CV_EXPORTS OpticalFlowDual_TVL1 : public DenseOpticalFlow
 {
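A minimal Python sketch of driving this DenseOpticalFlow implementation, assuming the optflow contrib module is available (opencv-contrib-python); the frame file names are illustrative:

@code{.py}
import cv2 as cv

prev = cv.imread('frame0.png', cv.IMREAD_GRAYSCALE)   # hypothetical consecutive frames
curr = cv.imread('frame1.png', cv.IMREAD_GRAYSCALE)

# Create the Dual TV-L1 solver via the optflow contrib module and compute dense flow.
tvl1 = cv.optflow.createOptFlow_DualTVL1()
flow = tvl1.calc(prev, curr, None)                     # flow has shape (H, W, 2)
@endcode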
@@ -642,7 +642,8 @@ public:
 documentation of source stream to know the right URL.
 @param apiPreference preferred Capture API backends to use. Can be used to enforce a specific reader
 implementation if multiple are available: e.g. cv::CAP_FFMPEG or cv::CAP_IMAGES or cv::CAP_DSHOW.
-@sa The list of supported API backends cv::VideoCaptureAPIs
+
+@sa cv::VideoCaptureAPIs
 */
 CV_WRAP VideoCapture(const String& filename, int apiPreference);
 
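In the Python bindings, the apiPreference overload documented in this hunk can be exercised like this (file name and backend choice are illustrative):

@code{.py}
import cv2 as cv

# Force the FFMPEG backend when opening a video file.
cap = cv.VideoCapture('video.mp4', cv.CAP_FFMPEG)
if not cap.isOpened():
    raise RuntimeError('FFMPEG backend could not open the file')
@endcode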
@@ -653,7 +654,7 @@ public:
 Use a `domain_offset` to enforce a specific reader implementation if multiple are available like cv::CAP_FFMPEG or cv::CAP_IMAGES or cv::CAP_DSHOW.
 e.g. to open Camera 1 using the MS Media Foundation API use `index = 1 + cv::CAP_MSMF`
 
-@sa The list of supported API backends cv::VideoCaptureAPIs
+@sa cv::VideoCaptureAPIs
 */
 CV_WRAP VideoCapture(int index);
 
@@ -665,7 +666,7 @@ public:
 @param apiPreference preferred Capture API backends to use. Can be used to enforce a specific reader
 implementation if multiple are available: e.g. cv::CAP_DSHOW or cv::CAP_MSMF or cv::CAP_V4L2.
 
-@sa The list of supported API backends cv::VideoCaptureAPIs
+@sa cv::VideoCaptureAPIs
 */
 CV_WRAP VideoCapture(int index, int apiPreference);
 
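Likewise for the camera-index constructors in the two hunks above, a short Python sketch showing both the explicit apiPreference overload and the older domain-offset form (the backend constants are platform-dependent illustrations):

@code{.py}
import cv2 as cv

# Preferred form: pass the backend explicitly as apiPreference.
cap = cv.VideoCapture(0, cv.CAP_V4L2)        # camera 0 via Video4Linux2 (Linux)

# Older domain-offset form: index = camera_id + backend constant.
cap_msmf = cv.VideoCapture(1 + cv.CAP_MSMF)  # camera 1 via MS Media Foundation (Windows)
@endcode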