fix minor issues in calib3d Docs

shyama7004 2025-02-17 12:38:00 +05:30
parent ae25c3194f
commit 18c0368840
4 changed files with 21 additions and 22 deletions

View File

@ -131,7 +131,7 @@ Explanation
The formation of the equations I mentioned above aims
to find major patterns in the input: in the case of the chessboard these are the corners of the
squares and for the circles, well, the circles themselves. ChArUco board is equivalent to
chessboard, but corners are mached by ArUco markers. The position of these will form the
chessboard, but corners are matched by ArUco markers. The position of these will form the
result which will be written into the *pointBuf* vector.
@snippet samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp find_pattern
Depending on the type of the input pattern you use either the @ref cv::findChessboardCorners or
@ -144,9 +144,9 @@ Explanation
@note Board size and the number of matched points are different for chessboard, circles grid and ChArUco.
All chessboard-related algorithms expect the number of inner corners as board width and height.
Board size of circles grid is just amount of circles by both grid dimentions. ChArUco board size
Board size of circles grid is just amount of circles by both grid dimensions. ChArUco board size
is defined in squares, but the detection result is a list of inner corners, which is why it is smaller
by 1 in both dimentions.
by 1 in both dimensions.
Then again, in the case of cameras, we only take camera images when an input delay time has passed.
This is done in order to allow the user to move the chessboard around and capture different images.
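For the chessboard case, a minimal self-contained sketch of this detection step could look as follows (the file name, board size and flags here are illustrative assumptions, not the tutorial's exact settings):
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    cv::Mat view = cv::imread("board.jpg"); // hypothetical input image
    cv::Mat viewGray;
    cv::cvtColor(view, viewGray, cv::COLOR_BGR2GRAY);

    cv::Size boardSize(9, 6);               // inner corners, not squares
    std::vector<cv::Point2f> pointBuf;
    bool found = cv::findChessboardCorners(view, boardSize, pointBuf,
                     cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
    if (found)
        // refine corner locations to sub-pixel accuracy before calibration
        cv::cornerSubPix(viewGray, pointBuf, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
    return found ? 0 : 1;
}
@endcode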

View File

@ -12,7 +12,7 @@ Interactive camera calibration application {#tutorial_interactive_calibration}
| Compatibility | OpenCV >= 3.1 |
According to classical calibration technique user must collect all data first and when run @ref cv::calibrateCamera function
According to classical calibration technique user must collect all data first and then run @ref cv::calibrateCamera function
to obtain camera parameters. If the average re-projection error is huge or if the estimated parameters seem to be wrong, the
process of selecting or collecting data and running @ref cv::calibrateCamera repeats.
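As a rough sketch of that classical loop (all names are illustrative; the point lists are assumed to have been filled by a detection pass beforehand):
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // assumed to be filled beforehand: one 3D/2D point set per collected view
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize(640, 480); // assumed sensor resolution

    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    // the return value is the average re-projection error: if it is too big,
    // collect better data and run the calibration again
    std::cout << "re-projection error: " << rms << std::endl;
    return 0;
}
@endcode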
@ -96,9 +96,9 @@ By default values of advanced parameters are stored in defaultConfig.xml
- *charuco_square_length*: size of square on chAruco board (in pixels)
- *charuco_marker_size*: size of Aruco markers on chAruco board (in pixels)
- *calibration_step*: interval in frames between launches of @ref cv::calibrateCamera
- *max_frames_num*: if number of frames for calibration is greater then this value frames filter starts working.
- *max_frames_num*: if number of frames for calibration is greater than this value frames filter starts working.
After filtration, the size of the calibration dataset is equal to *max_frames_num*
- *min_frames_num*: if number of frames is greater then this value turns on auto flags tuning, undistorted view and quality evaluation
- *min_frames_num*: if number of frames is greater than this value turns on auto flags tuning, undistorted view and quality evaluation
- *solver_eps*: precision of the Levenberg-Marquardt solver in @ref cv::calibrateCamera
- *solver_max_iters*: iteration limit of the solver
- *fast_solver*: if this value is nonzero and Lapack is found, QR decomposition is used instead of SVD in the solver.
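For illustration, *solver_eps* and *solver_max_iters* plausibly map onto the termination-criteria argument of @ref cv::calibrateCamera; a sketch of that mapping (an assumption, not the application's actual code):
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <vector>

// sketch: pass the solver precision and iteration limit to cv::calibrateCamera
// through its TermCriteria argument (values mirror the defaults listed above)
std::vector<std::vector<cv::Point3f>> objectPoints; // filled beforehand
std::vector<std::vector<cv::Point2f>> imagePoints;  // filled beforehand
cv::Size imageSize(640, 480);
cv::Mat cameraMatrix, distCoeffs;
std::vector<cv::Mat> rvecs, tvecs;

double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                 cameraMatrix, distCoeffs, rvecs, tvecs, 0,
                                 cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                                                  /*solver_max_iters*/ 30, /*solver_eps*/ 1e-7));
@endcode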
@ -129,7 +129,7 @@ This pattern is very sensitive to quality of production and measurements.
Data filtration
------
When size of calibration dataset is greater then *max_frames_num* starts working
When the size of the calibration dataset is greater than *max_frames_num*, the data
filter starts working. It tries to remove "bad" frames from the dataset. The filter removes the frame
on which \f$loss\_function\f$ takes its maximum.
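The filtering idea can be sketched like this (illustrative only; the per-frame losses are assumed to have been computed already, and the container names are made up):
@code{.cpp}
#include <opencv2/core.hpp>
#include <algorithm>
#include <vector>

// sketch: repeatedly drop the frame whose loss is maximal until the
// dataset size is back to max_frames_num
void filterDataset(std::vector<cv::Mat>& frames, std::vector<double>& losses,
                   size_t maxFramesNum)
{
    while (frames.size() > maxFramesNum)
    {
        size_t worst = std::max_element(losses.begin(), losses.end()) - losses.begin();
        frames.erase(frames.begin() + worst);
        losses.erase(losses.begin() + worst);
    }
}
@endcode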

View File

@ -14,8 +14,7 @@ Real Time pose estimation of a textured object {#tutorial_real_time_pose}
Nowadays, augmented reality is one of the top research topics in the computer vision and robotics fields.
The most elemental problem in augmented reality is the estimation of the camera pose with respect to an
object in the case of computer vision area to do later some 3D rendering or in the case of robotics
obtain an object pose in order to grasp it and do some manipulation. However, this is not a trivial
object in the case of computer vision area to perform subsequent 3D rendering or, in robotics, to obtain an object pose for grasping and manipulation. However, this is not a trivial
problem to solve, because the most common issue in image processing is the computational
cost of applying many algorithms or mathematical operations to solve a problem that is basic
and immediate for humans.
@ -23,7 +22,7 @@ and immediately for humans.
Goal
----
In this tutorial is explained how to build a real time application to estimate the camera pose in
This tutorial explains how to build a real-time application to estimate the camera pose in
order to track a textured object with six degrees of freedom given a 2D image and its 3D textured
model.
@ -74,7 +73,7 @@ The tutorial consists of two main programs:
-# **Model registration**
This application is exclusive to whom don't have a 3D textured model of the object to be detected.
This application is intended for users who do not have a 3D textured model of the object to be detected.
You can use this program to create your own textured 3D model. This program only works for planar
objects, so if you want to model an object with a complex shape you should use more sophisticated
software to create it.
@ -82,7 +81,7 @@ The tutorial consists of two main programs:
The application needs an input image of the object to be registered and its 3D mesh. We also have
to provide the intrinsic parameters of the camera with which the input image was taken. All the
files need to be specified using the absolute path or the path relative to your application's
working directory. If none files are specified the program will try to open the provided default
working directory. If no files are specified the program will try to open the provided default
parameters.
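For instance, intrinsic parameters stored in a YAML file can be loaded with cv::FileStorage; a sketch (the file and node names here are assumptions, not necessarily those used by the tutorial files):
@code{.cpp}
#include <opencv2/core.hpp>
#include <string>

// sketch: read camera intrinsics from a YAML file with cv::FileStorage
void loadIntrinsics(const std::string& path, cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
{
    cv::FileStorage fs(path, cv::FileStorage::READ);
    fs["camera_matrix"] >> cameraMatrix;         // assumed node name
    fs["distortion_coefficients"] >> distCoeffs; // assumed node name
    fs.release();
}
@endcode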
The application starts by extracting the ORB features and descriptors from the input image and
@ -97,7 +96,7 @@ The tutorial consists of two main programs:
-# **Model detection**
The aim of this application is estimate in real time the object pose given its 3D textured model.
The aim of this application is to estimate in real time the object pose given its 3D textured model.
The application starts by loading the 3D textured model in YAML file format with the same
structure explained in the model registration program. From the scene, the ORB features and
@ -106,7 +105,7 @@ The tutorial consists of two main programs:
Using the found matches along with the @ref cv::solvePnPRansac function, the `R` and `t` of
the camera are computed. Finally, a KalmanFilter is applied in order to reject bad poses.
In the case that you compiled OpenCV with the samples, you can find it in opencv/build/bin/cpp-tutorial-pnp_detection\`.
In the case that you compiled OpenCV with the samples, you can find it in `opencv/build/bin/cpp-tutorial-pnp_detection`.
Then you can run the application and change some parameters:
@code{.cpp}
This program shows how to detect an object given its 3D textured model. You can choose to use a recorded video or the webcam.
@ -326,7 +325,7 @@ Here is explained in detail the code for the real time application:
descriptors, match the extracted descriptors with the given model descriptors (and vice
versa) using *two Nearest Neighbour* matching. Then, a ratio test is applied to the matches in both
directions in order to remove those matches whose distance ratio between the first and second best match is larger
than a given threshold. Finally, a symmetry test is applied in order the remove non symmetrical
than a given threshold. Finally, a symmetry test is applied in order to remove non symmetrical
matches.
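Before the tutorial's own implementation below, here is a standalone sketch of the ratio-test step, assuming ORB (binary) descriptors and a brute-force matcher; the 0.8 threshold is an illustrative choice:
@code{.cpp}
#include <opencv2/features2d.hpp>
#include <vector>

// sketch: keep a match only when the best distance is clearly smaller
// than the second-best distance (Lowe's ratio test)
std::vector<cv::DMatch> ratioTest(const cv::Mat& sceneDescriptors,
                                  const cv::Mat& modelDescriptors, float ratio = 0.8f)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING); // Hamming norm suits ORB descriptors
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(sceneDescriptors, modelDescriptors, knn, 2); // two Nearest Neighbour
    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < ratio * m[1].distance)
            good.push_back(m[0]);
    return good;
}
@endcode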
@code{.cpp}
void RobustMatcher::robustMatch( const cv::Mat& frame, std::vector<cv::DMatch>& good_matches,
@ -489,7 +488,7 @@ Here is explained in detail the code for the real time application:
}
@endcode
In the following code are the 3th and 4th steps of the main algorithm. The first, calling the
In the following code are the 3rd and 4th steps of the main algorithm. The first, calling the
above function and the second taking the output inliers vector from RANSAC to get the 2D scene
points for drawing purposes. As seen in the code, we must be sure to apply RANSAC only if we have
matches; otherwise, the function @ref cv::solvePnPRansac crashes due to an OpenCV *bug*.
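That guard can be sketched as follows (names are illustrative; the camera matrix and distortion coefficients are assumed known from calibration):
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <vector>

// sketch: run RANSAC PnP only when there are matches, since an empty
// input makes cv::solvePnPRansac fail as described above
bool estimatePose(const std::vector<cv::Point3f>& modelPoints,
                  const std::vector<cv::Point2f>& scenePoints,
                  const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                  cv::Mat& rvec, cv::Mat& tvec, cv::Mat& inliers)
{
    if (modelPoints.empty() || scenePoints.empty())
        return false; // nothing to do: skip RANSAC entirely
    return cv::solvePnPRansac(modelPoints, scenePoints, cameraMatrix, distCoeffs,
                              rvec, tvec, false /*useExtrinsicGuess*/,
                              100 /*iterations*/, 8.0f /*reproj. error*/,
                              0.99 /*confidence*/, inliers);
}
@endcode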

View File

@ -168,8 +168,8 @@ components:
plays a significant role as it requires fewer iterations;
furthermore, on average the P3P solver has around 1.39
estimated models. Also, in the new version of `solvePnPRansac(...)`
with `UsacParams` there is an options to pass empty intrinsic
matrix `InputOutputArray cameraMatrix`. If matrix is empty than
with `UsacParams` there is an option to pass empty intrinsic
matrix `InputOutputArray cameraMatrix`. If matrix is empty then
the Direct Linear Transformation algorithm (PnP with 6 points) is used, and the
framework outputs not only the rotation and translation vectors but
also the calibration matrix.
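A sketch of that call with the `UsacParams` overload (the point containers are placeholders; parameter values are assumptions):
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <vector>

// sketch: pass an empty cameraMatrix so that DLT (PnP with 6 points) is used
// and the calibration matrix is estimated together with the pose
bool poseWithUnknownIntrinsics(const std::vector<cv::Point3f>& objectPoints,
                               const std::vector<cv::Point2f>& imagePoints)
{
    cv::Mat cameraMatrix; // intentionally empty: estimated by the framework
    cv::Mat rvec, tvec, inliers;
    cv::UsacParams params; // default USAC parameters
    bool ok = cv::solvePnPRansac(objectPoints, imagePoints, cameraMatrix,
                                 cv::noArray(), rvec, tvec, inliers, params);
    // on success, cameraMatrix now holds the estimated calibration matrix
    return ok;
}
@endcode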
@ -201,7 +201,7 @@ a neighborhood graph. In framework there are 3 options to do it:
cells using a hash table. The method is described in @cite barath2019progressive. Less
accurate than `NEIGH_FLANN_RADIUS`, although significantly faster.
Note, `NEIGH_FLANN_RADIUS` and `NEIGH_FLANN_RADIUS` are not able to PnP
Note, `NEIGH_FLANN_RADIUS` and `NEIGH_GRID` are not applicable to the PnP
solver, since the object points are 3D.
New flags:
@ -236,7 +236,7 @@ A few other important parameters:
2. `loIterations` number of iterations for Local Optimization method.
*The default value is 10*. By increasing `loIterations` the output
model could be more accurate, however, the computationial time may
model could be more accurate, however, the computational time may
also increase.
3. `loSampleSize` maximum sample number for Local Optimization. *The
@ -253,7 +253,7 @@ There are three new sample files in opencv/samples directory.
1. `epipolar_lines.cpp` input arguments of the `main` function are two
paths to images. Then correspondences are found using the
SIFT detector. The fundamental matrix is found using RANSAC from
tentative correspondences and epipolar lines are plot.
tentative correspondences and epipolar lines are plotted.
2. `essential_mat_reconstr.cpp` input arguments are the path to a data file
containing image names and a single intrinsic matrix, and the directory
@ -266,4 +266,4 @@ There are three new sample files in opencv/samples directory.
3. `essential_mat_reconstr.py` the same functionality as in the .cpp
file, however instead of clustering points to a plane the 3D map of
object points is plot.
object points is plotted.