Add homography tutorial and sample code.

This commit is contained in:
catree 2017-12-13 21:41:40 +01:00
parent 7b701fee60
commit b417fb0939
22 changed files with 1196 additions and 0 deletions

View File

@ -554,6 +554,13 @@
pages = {135--144},
publisher = {Springer}
}
@book{Ma:2003:IVI,
author = {Ma, Yi and Soatto, Stefano and Kosecka, Jana and Sastry, S. Shankar},
title = {An Invitation to 3-D Vision: From Images to Geometric Models},
year = {2003},
isbn = {0387008934},
publisher = {Springer-Verlag},
}
@ARTICLE{Malis,
author = {Malis, Ezio and Vargas, Manuel and others},
title = {Deeper understanding of the homography decomposition for vision-based control},

View File

@ -0,0 +1,467 @@
Basic concepts of the homography explained with code {#tutorial_homography}
=============================
Introduction
----
This tutorial will demonstrate the basic concepts of the homography with some code examples.
For detailed explanations about the theory, please refer to a computer vision course or a computer vision book, e.g.:
* Multiple View Geometry in Computer Vision, @cite HartleyZ00.
* An Invitation to 3-D Vision: From Images to Geometric Models, @cite Ma:2003:IVI
* Computer Vision: Algorithms and Applications, @cite RS10
The tutorial code can be found [here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/features2D/Homography).
The images used in this tutorial can be found [here](https://github.com/opencv/opencv/tree/master/samples/data) (`left*.jpg`).
Basic theory
----
### What is the homography matrix?
Briefly, the planar homography relates the transformation between two planes (up to a scale factor):
\f[
s
\begin{bmatrix}
x^{'} \\
y^{'} \\
1
\end{bmatrix} = H
\begin{bmatrix}
x \\
y \\
1
\end{bmatrix} =
\begin{bmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{bmatrix}
\begin{bmatrix}
x \\
y \\
1
\end{bmatrix}
\f]
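Dividing the first two rows by the third one removes the scale factor \f$ s \f$ and gives the equivalent inhomogeneous form:
\f[
x^{'} = \frac{h_{11} x + h_{12} y + h_{13}}{h_{31} x + h_{32} y + h_{33}} \hspace{1em} , \hspace{1em}
y^{'} = \frac{h_{21} x + h_{22} y + h_{23}}{h_{31} x + h_{32} y + h_{33}}
\f]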
The homography matrix is a `3x3` matrix but with 8 DoF (degrees of freedom) as it is estimated up to a scale. It is generally normalized (see also \ref lecture_16 "1")
with \f$ h_{33} = 1 \f$ or \f$ h_{11}^2 + h_{12}^2 + h_{13}^2 + h_{21}^2 + h_{22}^2 + h_{23}^2 + h_{31}^2 + h_{32}^2 + h_{33}^2 = 1 \f$.
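As an illustration, a minimal sketch (not part of the tutorial samples) of both normalizations, assuming `H` is a `3x3` `CV_64F` homography matrix and `using namespace cv` as in the samples:
```
// Either scale the homography so that h33 = 1 ...
Mat H_norm1 = H / H.at<double>(2,2);
// ... or so that the sum of its squared elements is 1 (norm() returns the Frobenius norm of a matrix)
Mat H_norm2 = H / norm(H);
```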
The following examples show different kinds of transformations, but they all relate a transformation between two planes.
* a planar surface and the image plane (image taken from \ref projective_transformations "2")
![](images/homography_transformation_example1.jpg)
* a planar surface viewed by two camera positions (images taken from \ref szeliski "3" and \ref projective_transformations "2")
![](images/homography_transformation_example2.jpg)
* a camera rotating around its axis of projection, which is equivalent to considering that the points lie on a plane at infinity (image taken from \ref projective_transformations "2")
![](images/homography_transformation_example3.jpg)
### How can the homography transformation be useful?
* Camera pose estimation from coplanar points, for instance for marker-based augmented reality (see the first example above)
![](images/homography_pose_estimation.jpg)
* Perspective removal / correction (see the second example above)
![](images/homography_perspective_correction.jpg)
* Panorama stitching (see the second and third examples above)
![](images/homography_panorama_stitching.jpg)
Demonstration code
----
### Demo 1: Pose estimation from coplanar points
\note Please note that the code to estimate the camera pose from the homography is only an example; if you want to estimate the camera pose for a planar or an arbitrary object, use @ref cv::solvePnP instead.
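For reference, a minimal sketch of that recommended approach, assuming `objectPoints` (3D points in the object frame), `corners` (detected chessboard corners in pixels), `cameraMatrix` and `distCoeffs` as in the sample below:
```
// Pose of the planar object directly with solvePnP (3D object points and 2D pixel corners)
Mat rvec_pnp, tvec_pnp;
solvePnP(objectPoints, corners, cameraMatrix, distCoeffs, rvec_pnp, tvec_pnp);
// rvec_pnp (Rodrigues rotation vector) and tvec_pnp (translation vector) form the camera pose cMo
```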
The homography can be estimated using for instance the Direct Linear Transform (DLT) algorithm (see \ref lecture_16 "1" for more information).
As the object is planar, the transformation between points expressed in the object frame and points projected into the image plane (expressed in the normalized camera frame) is a homography. It is only because the object is planar
that the camera pose can be retrieved from the homography, assuming the camera intrinsic parameters are known (see \ref projective_transformations "2" or \ref answer_dsp "4").
This can be tested easily using a chessboard object and `findChessboardCorners()` to get the corner locations in the image.
The first step is to detect the chessboard corners; the chessboard size (`patternSize`), here `9x6`, is required:
@snippet tutorial_homography_ex1_pose_from_homography.cpp find-chessboard-corners
![](images/homography_pose_chessboard_corners.jpg)
The object points expressed in the object frame can be computed easily knowing the size of a chessboard square:
@snippet tutorial_homography_ex1_pose_from_homography.cpp compute-chessboard-object-points
The `Z=0` coordinate must be dropped so that 2D points are used for the homography estimation:
@snippet tutorial_homography_ex1_pose_from_homography.cpp compute-object-points
The image points expressed in the normalized camera frame can be computed from the corner points by applying the inverse perspective transformation, using the camera intrinsics and the distortion coefficients:
@snippet tutorial_homography_ex1_pose_from_homography.cpp load-intrinsics
@snippet tutorial_homography_ex1_pose_from_homography.cpp compute-image-points
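Ignoring the distortion, the relation between the pixel coordinates \f$ (u, v) \f$ and the normalized camera coordinates \f$ (x, y) \f$ is simply:
\f[
x = \frac{u - c_x}{f_x} \hspace{1em} , \hspace{1em} y = \frac{v - c_y}{f_y}
\f]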
The homography can then be estimated with:
@snippet tutorial_homography_ex1_pose_from_homography.cpp estimate-homography
A quick solution to retrieve the pose from the homography matrix is (see \ref pose_ar "5"):
@snippet tutorial_homography_ex1_pose_from_homography.cpp pose-from-homography
\f[
\begin{align*}
\boldsymbol{X} &= \left( X, Y, 0, 1 \right ) \\
\boldsymbol{x} &= \boldsymbol{P}\boldsymbol{X} \\
&= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{r_3} \hspace{0.5em} \boldsymbol{t} \right ]
\begin{pmatrix}
X \\
Y \\
0 \\
1
\end{pmatrix} \\
&= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ]
\begin{pmatrix}
X \\
Y \\
1
\end{pmatrix} \\
&= \boldsymbol{H}
\begin{pmatrix}
X \\
Y \\
1
\end{pmatrix}
\end{align*}
\f]
\f[
\begin{align*}
\boldsymbol{H} &= \lambda \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \\
\boldsymbol{K}^{-1} \boldsymbol{H} &= \lambda \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \\
\boldsymbol{P} &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \left( \boldsymbol{r_1} \times \boldsymbol{r_2} \right ) \hspace{0.5em} \boldsymbol{t} \right ]
\end{align*}
\f]
This is only a quick solution (see also \ref projective_transformations "2"): it does not ensure that the resulting rotation matrix is orthogonal, and the scale is estimated roughly by normalizing the first column to 1.
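A common refinement, not done in this sample, is to replace the estimated matrix by the closest rotation matrix in the Frobenius sense, for instance via a polar decomposition computed from the SVD:
```
// Project the estimated matrix R onto the set of rotation matrices: R_ortho = U * Vt
Mat W, U, Vt;
SVDecomp(R, W, U, Vt);
Mat R_ortho = U * Vt; // a valid rotation if det(R_ortho) > 0, which holds for a reasonable estimate
```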
To check the result, the object frame projected into the image with the estimated camera pose is displayed:
![](images/homography_pose.jpg)
### Demo 2: Perspective correction
In this example, a source image will be transformed into a desired perspective view by computing the homography that maps the source points into the desired points.
The following image shows the source image (left) and the chessboard view that we want to transform into the desired chessboard view (right).
![Source and desired views](images/homography_source_desired_images.jpg)
The first step is to detect the chessboard corners in the source and desired images:
@snippet tutorial_homography_ex2_perspective_correction.cpp find-corners
The homography is estimated easily with:
@snippet tutorial_homography_ex2_perspective_correction.cpp estimate-homography
To warp the source chessboard view into the desired chessboard view, we use @ref cv::warpPerspective:
@snippet tutorial_homography_ex2_perspective_correction.cpp warp-chessboard
The result image is:
![](images/homography_perspective_correction_chessboard_warp.jpg)
To compute the coordinates of the source corners transformed by the homography:
@snippet tutorial_homography_ex2_perspective_correction.cpp compute-transformed-corners
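Note that the same coordinates can be obtained directly with @ref cv::perspectiveTransform, which applies the homography and performs the division by the third homogeneous coordinate (here with `corners1` and `H` from the snippets above):
```
vector<Point2f> corners1_transformed;
perspectiveTransform(corners1, corners1_transformed, H);
```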
To check the correctness of the calculation, the matching lines are displayed:
![](images/homography_perspective_correction_chessboard_matches.jpg)
### Demo 3: Homography from the camera displacement
The homography relates the transformation between two planes, and it is possible to retrieve the corresponding camera displacement that goes from the first to the second plane view (see @cite Malis for more information).
Before going into the details of computing the homography from the camera displacement, let us recall a few things about the camera pose and homogeneous transformations.
The function @ref cv::solvePnP computes the camera pose from correspondences between 3D object points (points expressed in the object frame) and their projected 2D image points (object points viewed in the image).
The intrinsic parameters and the distortion coefficients are required (see the camera calibration process).
\f[
\begin{align*}
s
\begin{bmatrix}
u \\
v \\
1
\end{bmatrix} &=
\begin{bmatrix}
f_x & 0 & c_x \\
0 & f_y & c_y \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z
\end{bmatrix}
\begin{bmatrix}
X_o \\
Y_o \\
Z_o \\
1
\end{bmatrix} \\
&= \boldsymbol{K} \hspace{0.2em} ^{c}\textrm{M}_o
\begin{bmatrix}
X_o \\
Y_o \\
Z_o \\
1
\end{bmatrix}
\end{align*}
\f]
\f$ \boldsymbol{K} \f$ is the intrinsic matrix and \f$ ^{c}\textrm{M}_o \f$ is the camera pose. The output of @ref cv::solvePnP is exactly this: `rvec` is the Rodrigues rotation vector and `tvec` the translation vector.
\f$ ^{c}\textrm{M}_o \f$ can be represented in homogeneous form; it transforms a point expressed in the object frame into the camera frame:
\f[
\begin{align*}
\begin{bmatrix}
X_c \\
Y_c \\
Z_c \\
1
\end{bmatrix} &=
\hspace{0.2em} ^{c}\textrm{M}_o
\begin{bmatrix}
X_o \\
Y_o \\
Z_o \\
1
\end{bmatrix} \\
&=
\begin{bmatrix}
^{c}\textrm{R}_o & ^{c}\textrm{t}_o \\
0_{1\times3} & 1
\end{bmatrix}
\begin{bmatrix}
X_o \\
Y_o \\
Z_o \\
1
\end{bmatrix} \\
&=
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
X_o \\
Y_o \\
Z_o \\
1
\end{bmatrix}
\end{align*}
\f]
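As an illustration, a minimal sketch (not part of the samples) that builds this `4x4` homogeneous matrix from the `rvec` / `tvec` output of @ref cv::solvePnP:
```
Mat R;
Rodrigues(rvec, R);                   // 3x3 rotation matrix from the Rodrigues vector
Mat cMo = Mat::eye(4, 4, CV_64F);     // homogeneous transformation cMo
R.copyTo(cMo(Rect(0, 0, 3, 3)));      // top-left 3x3 block: rotation
tvec.copyTo(cMo(Rect(3, 0, 1, 3)));   // last column: translation
```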
Transforming a point expressed in one frame into another frame can be done easily with matrix multiplication:
* \f$ ^{c_1}\textrm{M}_o \f$ is the camera pose for camera 1
* \f$ ^{c_2}\textrm{M}_o \f$ is the camera pose for camera 2
To transform a 3D point expressed in the camera 1 frame to the camera 2 frame:
\f[
^{c_2}\textrm{M}_{c_1} = \hspace{0.2em} ^{c_2}\textrm{M}_{o} \cdot \hspace{0.1em} ^{o}\textrm{M}_{c_1} = \hspace{0.2em} ^{c_2}\textrm{M}_{o} \cdot \hspace{0.1em} \left( ^{c_1}\textrm{M}_{o} \right )^{-1} =
\begin{bmatrix}
^{c_2}\textrm{R}_{o} & ^{c_2}\textrm{t}_{o} \\
0_{1 \times 3} & 1
\end{bmatrix} \cdot
\begin{bmatrix}
^{c_1}\textrm{R}_{o}^T & - \hspace{0.2em} ^{c_1}\textrm{R}_{o}^T \cdot \hspace{0.2em} ^{c_1}\textrm{t}_{o} \\
0_{1 \times 3} & 1
\end{bmatrix}
\f]
In this example, we will compute the camera displacement between two camera poses with respect to the chessboard object. The first step is to compute the camera poses for the two images:
@snippet tutorial_homography_ex3_homography_from_camera_displacement.cpp compute-poses
![](images/homography_camera_displacement_poses.jpg)
The camera displacement can be computed from the camera poses using the formulas above:
@snippet tutorial_homography_ex3_homography_from_camera_displacement.cpp compute-c2Mc1
The homography related to a specific plane computed from the camera displacement is:
![By Homography-transl.svg: Per Rosengren derivative work: Appoose (Homography-transl.svg) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons](images/homography_camera_displacement.png)
On this figure, `n` is the normal vector of the plane and `d` the distance between the camera frame and the plane along the plane normal.
The [equation](https://en.wikipedia.org/wiki/Homography_(computer_vision)#3D_plane_to_plane_equation) to compute the homography from the camera displacement is:
\f[
^{2}\textrm{H}_{1} = \hspace{0.2em} ^{2}\textrm{R}_{1} - \hspace{0.1em} \frac{^{2}\textrm{t}_{1} \cdot n^T}{d}
\f]
where \f$ ^{2}\textrm{H}_{1} \f$ is the homography matrix that maps the points in the first camera frame to the corresponding points in the second camera frame, \f$ ^{2}\textrm{R}_{1} = \hspace{0.2em} ^{c_2}\textrm{R}_{o} \cdot \hspace{0.1em} ^{c_1}\textrm{R}_{o}^{T} \f$
is the rotation matrix that represents the rotation between the two camera frames and \f$ ^{2}\textrm{t}_{1} = \hspace{0.2em} ^{c_2}\textrm{R}_{o} \cdot \left( - \hspace{0.1em} ^{c_1}\textrm{R}_{o}^{T} \cdot \hspace{0.1em} ^{c_1}\textrm{t}_{o} \right ) + \hspace{0.1em} ^{c_2}\textrm{t}_{o} \f$
the translation vector between the two camera frames.
Here the normal vector `n` is the plane normal expressed in the camera 1 frame. It can be computed as the cross product of 2 vectors (using 3 non-collinear points that lie on the plane) or, in our case, directly with:
@snippet tutorial_homography_ex3_homography_from_camera_displacement.cpp compute-plane-normal-at-camera-pose-1
The distance `d` can be computed as the dot product between the plane normal and a point on the plane or by computing the [plane equation](http://mathworld.wolfram.com/Plane.html) and using the D coefficient:
@snippet tutorial_homography_ex3_homography_from_camera_displacement.cpp compute-plane-distance-to-the-camera-frame-1
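In other words, with \f$ \boldsymbol{X}_0 \f$ a point of the plane expressed in the camera 1 frame (here the chessboard origin transformed into the camera 1 frame):
\f[
d = n^T \cdot \boldsymbol{X}_0
\f]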
The projective homography matrix \f$ \textbf{G} \f$ can be computed from the Euclidean homography \f$ \textbf{H} \f$ using the intrinsic matrix \f$ \textbf{K} \f$ (see @cite Malis), here assuming the same camera between the two plane views:
\f[
\textbf{G} = \gamma \textbf{K} \textbf{H} \textbf{K}^{-1}
\f]
@snippet tutorial_homography_ex3_homography_from_camera_displacement.cpp compute-homography
In our case, the Z-axis of the chessboard goes inside the object whereas in the homography figure it goes outside. This is just a matter of sign:
\f[
^{2}\textrm{H}_{1} = \hspace{0.2em} ^{2}\textrm{R}_{1} + \hspace{0.1em} \frac{^{2}\textrm{t}_{1} \cdot n^T}{d}
\f]
@snippet tutorial_homography_ex3_homography_from_camera_displacement.cpp compute-homography-from-camera-displacement
We will now compare the projective homography computed from the camera displacement with the one estimated with @ref cv::findHomography:
```
findHomography H:
[0.32903393332201, -1.244138808862929, 536.4769088231476;
0.6969763913334046, -0.08935909072571542, -80.34068504082403;
0.00040511729592961, -0.001079740100565013, 0.9999999999999999]
homography from camera displacement:
[0.4160569997384721, -1.306889006892538, 553.7055461075881;
0.7917584252773352, -0.06341244158456338, -108.2770029401219;
0.0005926357240956578, -0.001020651672127799, 1]
```
The homography matrices are similar. We can compare image 1 warped using both homography matrices:
![Left: image warped using the homography estimated. Right: using the homography computed from the camera displacement](images/homography_camera_displacement_compare.jpg)
Visually, it is hard to see any difference between the image warped with the homography computed from the camera displacement and the one warped with the homography estimated by the @ref cv::findHomography function.
### Demo 4: Decompose the homography matrix
OpenCV 3 contains the function @ref cv::decomposeHomographyMat, which decomposes the homography matrix into a set of rotations, translations and plane normals.
First we will decompose the homography matrix computed from the camera displacement:
@snippet tutorial_homography_ex4_decompose_homography.cpp compute-homography-from-camera-displacement
The results of @ref cv::decomposeHomographyMat are:
@snippet tutorial_homography_ex4_decompose_homography.cpp decompose-homography-from-camera-displacement
```
Solution 0:
rvec from homography decomposition: [-0.0919829920641369, -0.5372581036567992, 1.310868863540717]
rvec from camera displacement: [-0.09198299206413783, -0.5372581036567995, 1.310868863540717]
tvec from homography decomposition: [-0.7747961019053186, -0.02751124463434032, -0.6791980037590677] and scaled by d: [-0.1578091561210742, -0.005603443652993778, -0.1383378976078466]
tvec from camera displacement: [0.1578091561210745, 0.005603443652993617, 0.1383378976078466]
plane normal from homography decomposition: [-0.1973513139420648, 0.6283451996579074, -0.7524857267431757]
plane normal at camera 1 pose: [0.1973513139420654, -0.6283451996579068, 0.752485726743176]
Solution 1:
rvec from homography decomposition: [-0.0919829920641369, -0.5372581036567992, 1.310868863540717]
rvec from camera displacement: [-0.09198299206413783, -0.5372581036567995, 1.310868863540717]
tvec from homography decomposition: [0.7747961019053186, 0.02751124463434032, 0.6791980037590677] and scaled by d: [0.1578091561210742, 0.005603443652993778, 0.1383378976078466]
tvec from camera displacement: [0.1578091561210745, 0.005603443652993617, 0.1383378976078466]
plane normal from homography decomposition: [0.1973513139420648, -0.6283451996579074, 0.7524857267431757]
plane normal at camera 1 pose: [0.1973513139420654, -0.6283451996579068, 0.752485726743176]
Solution 2:
rvec from homography decomposition: [0.1053487907109967, -0.1561929144786397, 1.401356552358475]
rvec from camera displacement: [-0.09198299206413783, -0.5372581036567995, 1.310868863540717]
tvec from homography decomposition: [-0.4666552552894618, 0.1050032934770042, -0.913007654671646] and scaled by d: [-0.0950475510338766, 0.02138689274867372, -0.1859598508065552]
tvec from camera displacement: [0.1578091561210745, 0.005603443652993617, 0.1383378976078466]
plane normal from homography decomposition: [-0.3131715472900788, 0.8421206145721947, -0.4390403768225507]
plane normal at camera 1 pose: [0.1973513139420654, -0.6283451996579068, 0.752485726743176]
Solution 3:
rvec from homography decomposition: [0.1053487907109967, -0.1561929144786397, 1.401356552358475]
rvec from camera displacement: [-0.09198299206413783, -0.5372581036567995, 1.310868863540717]
tvec from homography decomposition: [0.4666552552894618, -0.1050032934770042, 0.913007654671646] and scaled by d: [0.0950475510338766, -0.02138689274867372, 0.1859598508065552]
tvec from camera displacement: [0.1578091561210745, 0.005603443652993617, 0.1383378976078466]
plane normal from homography decomposition: [0.3131715472900788, -0.8421206145721947, 0.4390403768225507]
plane normal at camera 1 pose: [0.1973513139420654, -0.6283451996579068, 0.752485726743176]
```
The translation vector obtained from the homography decomposition can only be recovered up to a scale factor, which corresponds in fact to the distance `d`, since the plane normal has unit length.
As you can see, one solution matches almost perfectly the computed camera displacement. As stated in the documentation:
```
At least two of the solutions may further be invalidated if point correspondences are available by applying positive depth constraint (all points must be in front of the camera).
```
As the result of the decomposition is a camera displacement, if we have the initial camera pose \f$ ^{c_1}\textrm{M}_{o} \f$, we can compute the current camera pose
\f$ ^{c_2}\textrm{M}_{o} = \hspace{0.2em} ^{c_2}\textrm{M}_{c_1} \cdot \hspace{0.1em} ^{c_1}\textrm{M}_{o} \f$ and test if the 3D object points that belong to the plane are projected in front of the camera or not.
Another solution could be to retain the solution with the closest normal if we know the plane normal expressed at the camera 1 pose.
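As an illustration, a possible (hypothetical) check on the normals, keeping only the decompositions whose plane normal is close to `normal1` computed in Demo 3:
```
for (int i = 0; i < solutions; i++)
{
    // Both normals are unit vectors, so the dot product is the cosine of the angle between them
    double cos_angle = normals_decomp[i].dot(normal1);
    if (cos_angle > 0.9) // arbitrary threshold, about 25 degrees
        cout << "Solution " << i << " is consistent with the plane normal at the camera 1 pose" << endl;
}
```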
The same decomposition, but applied to the homography matrix estimated with @ref cv::findHomography:
```
Solution 0:
rvec from homography decomposition: [0.1552207729599141, -0.152132696119647, 1.323678695078694]
rvec from camera displacement: [-0.09198299206413783, -0.5372581036567995, 1.310868863540717]
tvec from homography decomposition: [-0.4482361704818117, 0.02485247635491922, -1.034409687207331] and scaled by d: [-0.09129598307571339, 0.005061910238634657, -0.2106868109173855]
tvec from camera displacement: [0.1578091561210745, 0.005603443652993617, 0.1383378976078466]
plane normal from homography decomposition: [-0.1384902722707529, 0.9063331452766947, -0.3992250922214516]
plane normal at camera 1 pose: [0.1973513139420654, -0.6283451996579068, 0.752485726743176]
Solution 1:
rvec from homography decomposition: [0.1552207729599141, -0.152132696119647, 1.323678695078694]
rvec from camera displacement: [-0.09198299206413783, -0.5372581036567995, 1.310868863540717]
tvec from homography decomposition: [0.4482361704818117, -0.02485247635491922, 1.034409687207331] and scaled by d: [0.09129598307571339, -0.005061910238634657, 0.2106868109173855]
tvec from camera displacement: [0.1578091561210745, 0.005603443652993617, 0.1383378976078466]
plane normal from homography decomposition: [0.1384902722707529, -0.9063331452766947, 0.3992250922214516]
plane normal at camera 1 pose: [0.1973513139420654, -0.6283451996579068, 0.752485726743176]
Solution 2:
rvec from homography decomposition: [-0.2886605671759886, -0.521049903923871, 1.381242030882511]
rvec from camera displacement: [-0.09198299206413783, -0.5372581036567995, 1.310868863540717]
tvec from homography decomposition: [-0.8705961357284295, 0.1353018038908477, -0.7037702049789747] and scaled by d: [-0.177321544550518, 0.02755804196893467, -0.1433427218822783]
tvec from camera displacement: [0.1578091561210745, 0.005603443652993617, 0.1383378976078466]
plane normal from homography decomposition: [-0.2284582117722427, 0.6009247303964522, -0.7659610393954643]
plane normal at camera 1 pose: [0.1973513139420654, -0.6283451996579068, 0.752485726743176]
Solution 3:
rvec from homography decomposition: [-0.2886605671759886, -0.521049903923871, 1.381242030882511]
rvec from camera displacement: [-0.09198299206413783, -0.5372581036567995, 1.310868863540717]
tvec from homography decomposition: [0.8705961357284295, -0.1353018038908477, 0.7037702049789747] and scaled by d: [0.177321544550518, -0.02755804196893467, 0.1433427218822783]
tvec from camera displacement: [0.1578091561210745, 0.005603443652993617, 0.1383378976078466]
plane normal from homography decomposition: [0.2284582117722427, -0.6009247303964522, 0.7659610393954643]
plane normal at camera 1 pose: [0.1973513139420654, -0.6283451996579068, 0.752485726743176]
```
Again, one of the solutions matches the computed camera displacement.
Additional references
----
* \anchor lecture_16 1. [Lecture 16: Planar Homographies](http://www.cse.psu.edu/~rtc12/CSE486/lecture16.pdf), Robert Collins
* \anchor projective_transformations 2. [2D projective transformations (homographies)](https://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec04.pdf), Christiano Gava, Gabriele Bleser
* \anchor szeliski 3. [Computer Vision: Algorithms and Applications](http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf), Richard Szeliski
* \anchor answer_dsp 4. [Step by Step Camera Pose Estimation for Visual Tracking and Planar Markers](https://dsp.stackexchange.com/a/2737)
* \anchor pose_ar 5. [Pose from homography estimation](https://team.inria.fr/lagadic/camera_localization/tutorial-pose-dlt-planar-opencv.html)

Binary files not shown: 14 new tutorial images added (9.8 KiB to 129 KiB each).

View File

@ -93,3 +93,10 @@ OpenCV.
*Author:* Fedor Morozov
Using *AKAZE* and *ORB* for planar object tracking.
- @subpage tutorial_homography
*Compatibility:* \> OpenCV 3.0
This tutorial will explain the basic concepts of the homography with some
demonstration code.

View File

@ -0,0 +1,23 @@
%YAML:1.0
---
image_width: 640
image_height: 480
board_width: 9
board_height: 6
square_size: 1.
aspectRatio: 1.
flags: 2
camera_matrix: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 5.3591575307485539e+02, 0., 3.4228314953752817e+02, 0.,
5.3591575307485539e+02, 2.3557082321320789e+02, 0., 0., 1. ]
distortion_coefficients: !!opencv-matrix
rows: 5
cols: 1
dt: d
data: [ -2.6637290673868386e-01, -3.8586722644459073e-02,
1.7831841406179300e-03, -2.8122035403651473e-04,
2.3838760574917545e-01 ]
avg_reprojection_error: 3.9259109564815858e-01

View File

@ -0,0 +1,156 @@
#include <iostream>
#include <opencv2/opencv_modules.hpp>
#ifdef HAVE_OPENCV_ARUCO
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
using namespace std;
using namespace cv;
namespace
{
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
void calcChessboardCorners(Size boardSize, float squareSize, vector<Point3f>& corners, Pattern patternType = CHESSBOARD)
{
corners.resize(0);
switch (patternType)
{
case CHESSBOARD:
case CIRCLES_GRID:
//! [compute-chessboard-object-points]
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float(j*squareSize),
float(i*squareSize), 0));
//! [compute-chessboard-object-points]
break;
case ASYMMETRIC_CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float((2*j + i % 2)*squareSize),
float(i*squareSize), 0));
break;
default:
CV_Error(Error::StsBadArg, "Unknown pattern type\n");
}
}
void poseEstimationFromCoplanarPoints(const string &imgPath, const string &intrinsicsPath, const Size &patternSize,
const float squareSize)
{
Mat img = imread(imgPath);
Mat img_corners = img.clone(), img_pose = img.clone();
//! [find-chessboard-corners]
vector<Point2f> corners;
bool found = findChessboardCorners(img, patternSize, corners);
//! [find-chessboard-corners]
if (!found)
{
cout << "Cannot find chessboard corners." << endl;
return;
}
drawChessboardCorners(img_corners, patternSize, corners, found);
imshow("Chessboard corners detection", img_corners);
//! [compute-object-points]
vector<Point3f> objectPoints;
calcChessboardCorners(patternSize, squareSize, objectPoints);
vector<Point2f> objectPointsPlanar;
for (size_t i = 0; i < objectPoints.size(); i++)
{
objectPointsPlanar.push_back(Point2f(objectPoints[i].x, objectPoints[i].y));
}
//! [compute-object-points]
//! [load-intrinsics]
FileStorage fs(intrinsicsPath, FileStorage::READ);
Mat cameraMatrix, distCoeffs;
fs["camera_matrix"] >> cameraMatrix;
fs["distortion_coefficients"] >> distCoeffs;
//! [load-intrinsics]
//! [compute-image-points]
vector<Point2f> imagePoints;
undistortPoints(corners, imagePoints, cameraMatrix, distCoeffs);
//! [compute-image-points]
//! [estimate-homography]
Mat H = findHomography(objectPointsPlanar, imagePoints);
cout << "H:\n" << H << endl;
//! [estimate-homography]
//! [pose-from-homography]
// Normalization to ensure that ||c1|| = 1
double norm = sqrt(H.at<double>(0,0)*H.at<double>(0,0) +
H.at<double>(1,0)*H.at<double>(1,0) +
H.at<double>(2,0)*H.at<double>(2,0));
H /= norm;
Mat c1 = H.col(0);
Mat c2 = H.col(1);
Mat c3 = c1.cross(c2);
Mat tvec = H.col(2);
Mat R(3, 3, CV_64F);
for (int i = 0; i < 3; i++)
{
R.at<double>(i,0) = c1.at<double>(i,0);
R.at<double>(i,1) = c2.at<double>(i,0);
R.at<double>(i,2) = c3.at<double>(i,0);
}
//! [pose-from-homography]
//! [display-pose]
Mat rvec;
Rodrigues(R, rvec);
aruco::drawAxis(img_pose, cameraMatrix, distCoeffs, rvec, tvec, 2*squareSize);
imshow("Pose from coplanar points", img_pose);
waitKey();
//! [display-pose]
}
const char* about = "Code for homography tutorial.\n"
"Example 1: pose from homography with coplanar points.\n";
const char* params
= "{ h help | false | print usage }"
"{ image | | path to a chessboard image (left04.jpg) }"
"{ intrinsics | | path to camera intrinsics (left_intrinsics.yml) }"
"{ width w | 9 | chessboard width }"
"{ height h | 6 | chessboard height }"
"{ square_size | 0.025 | chessboard square size }";
}
int main(int argc, char *argv[])
{
CommandLineParser parser(argc, argv, params);
if (parser.get<bool>("help"))
{
cout << about << endl;
parser.printMessage();
return 0;
}
Size patternSize(parser.get<int>("width"), parser.get<int>("height"));
float squareSize = (float) parser.get<double>("square_size");
poseEstimationFromCoplanarPoints(parser.get<string>("image"),
parser.get<string>("intrinsics"),
patternSize, squareSize);
return 0;
}
#else
int main()
{
std::cerr << "FATAL ERROR: This sample requires opencv_aruco module (from opencv_contrib)" << std::endl;
return 0;
}
#endif

View File

@ -0,0 +1,131 @@
#include <iostream>
#include <opencv2/opencv_modules.hpp>
#ifdef HAVE_OPENCV_ARUCO
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
using namespace std;
using namespace cv;
namespace
{
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
Scalar randomColor( RNG& rng )
{
int icolor = (unsigned int) rng;
return Scalar( icolor & 255, (icolor >> 8) & 255, (icolor >> 16) & 255 );
}
void calcChessboardCorners(Size boardSize, float squareSize, vector<Point3f>& corners, Pattern patternType = CHESSBOARD)
{
corners.resize(0);
switch (patternType)
{
case CHESSBOARD:
case CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float(j*squareSize),
float(i*squareSize), 0));
break;
case ASYMMETRIC_CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float((2*j + i % 2)*squareSize),
float(i*squareSize), 0));
break;
default:
CV_Error(Error::StsBadArg, "Unknown pattern type\n");
}
}
void perspectiveCorrection(const string &img1Path, const string &img2Path, const Size &patternSize, RNG &rng)
{
Mat img1 = imread(img1Path);
Mat img2 = imread(img2Path);
//! [find-corners]
vector<Point2f> corners1, corners2;
bool found1 = findChessboardCorners(img1, patternSize, corners1);
bool found2 = findChessboardCorners(img2, patternSize, corners2);
//! [find-corners]
if (!found1 || !found2)
{
cout << "Error, cannot find the chessboard corners in both images." << endl;
return;
}
//! [estimate-homography]
Mat H = findHomography(corners1, corners2);
cout << "H:\n" << H << endl;
//! [estimate-homography]
//! [warp-chessboard]
Mat img1_warp;
warpPerspective(img1, img1_warp, H, img1.size());
//! [warp-chessboard]
Mat img_draw_warp;
hconcat(img2, img1_warp, img_draw_warp);
imshow("Desired chessboard view / Warped source chessboard view", img_draw_warp);
//! [compute-transformed-corners]
Mat img_draw_matches;
hconcat(img1, img2, img_draw_matches);
for (size_t i = 0; i < corners1.size(); i++)
{
Mat pt1 = (Mat_<double>(3,1) << corners1[i].x, corners1[i].y, 1);
Mat pt2 = H * pt1;
pt2 /= pt2.at<double>(2);
Point end( (int) (img1.cols + pt2.at<double>(0)), (int) pt2.at<double>(1) );
line(img_draw_matches, corners1[i], end, randomColor(rng), 2);
}
imshow("Draw matches", img_draw_matches);
waitKey();
//! [compute-transformed-corners]
}
const char* about = "Code for homography tutorial.\n"
"Example 2: perspective correction.\n";
const char* params
= "{ h help | false | print usage }"
"{ image1 | | path to the source chessboard image (left02.jpg) }"
"{ image2 | | path to the desired chessboard image (left01.jpg) }"
"{ width w | 9 | chessboard width }"
"{ height h | 6 | chessboard height }";
}
int main(int argc, char *argv[])
{
cv::RNG rng( 0xFFFFFFFF );
CommandLineParser parser(argc, argv, params);
if (parser.get<bool>("help"))
{
cout << about << endl;
parser.printMessage();
return 0;
}
Size patternSize(parser.get<int>("width"), parser.get<int>("height"));
perspectiveCorrection(parser.get<string>("image1"),
parser.get<string>("image2"),
patternSize, rng);
return 0;
}
#else
int main()
{
std::cerr << "FATAL ERROR: This sample requires opencv_aruco module (from opencv_contrib)" << std::endl;
return 0;
}
#endif

View File

@ -0,0 +1,210 @@
#include <iostream>
#include <opencv2/opencv_modules.hpp>
#ifdef HAVE_OPENCV_ARUCO
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
using namespace std;
using namespace cv;
namespace
{
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
void calcChessboardCorners(Size boardSize, float squareSize, vector<Point3f>& corners, Pattern patternType = CHESSBOARD)
{
corners.resize(0);
switch (patternType)
{
case CHESSBOARD:
case CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float(j*squareSize),
float(i*squareSize), 0));
break;
case ASYMMETRIC_CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float((2*j + i % 2)*squareSize),
float(i*squareSize), 0));
break;
default:
CV_Error(Error::StsBadArg, "Unknown pattern type\n");
}
}
//! [compute-homography]
Mat computeHomography(const Mat &R_1to2, const Mat &tvec_1to2, const double d_inv, const Mat &normal)
{
Mat homography = R_1to2 + d_inv * tvec_1to2*normal.t();
return homography;
}
//! [compute-homography]
Mat computeHomography(const Mat &R1, const Mat &tvec1, const Mat &R2, const Mat &tvec2,
const double d_inv, const Mat &normal)
{
Mat homography = R2 * R1.t() + d_inv * (-R2 * R1.t() * tvec1 + tvec2) * normal.t();
return homography;
}
//! [compute-c2Mc1]
void computeC2MC1(const Mat &R1, const Mat &tvec1, const Mat &R2, const Mat &tvec2,
Mat &R_1to2, Mat &tvec_1to2)
{
//c2Mc1 = c2Mo * oMc1 = c2Mo * c1Mo.inv()
R_1to2 = R2 * R1.t();
tvec_1to2 = R2 * (-R1.t()*tvec1) + tvec2;
}
//! [compute-c2Mc1]
void homographyFromCameraDisplacement(const string &img1Path, const string &img2Path, const Size &patternSize,
const float squareSize, const string &intrinsicsPath)
{
Mat img1 = imread(img1Path);
Mat img2 = imread(img2Path);
//! [compute-poses]
vector<Point2f> corners1, corners2;
bool found1 = findChessboardCorners(img1, patternSize, corners1);
bool found2 = findChessboardCorners(img2, patternSize, corners2);
if (!found1 || !found2)
{
cout << "Error, cannot find the chessboard corners in both images." << endl;
return;
}
vector<Point3f> objectPoints;
calcChessboardCorners(patternSize, squareSize, objectPoints);
FileStorage fs(intrinsicsPath, FileStorage::READ);
Mat cameraMatrix, distCoeffs;
fs["camera_matrix"] >> cameraMatrix;
fs["distortion_coefficients"] >> distCoeffs;
Mat rvec1, tvec1;
solvePnP(objectPoints, corners1, cameraMatrix, distCoeffs, rvec1, tvec1);
Mat rvec2, tvec2;
solvePnP(objectPoints, corners2, cameraMatrix, distCoeffs, rvec2, tvec2);
//! [compute-poses]
Mat img1_copy_pose = img1.clone(), img2_copy_pose = img2.clone();
Mat img_draw_poses;
aruco::drawAxis(img1_copy_pose, cameraMatrix, distCoeffs, rvec1, tvec1, 2*squareSize);
aruco::drawAxis(img2_copy_pose, cameraMatrix, distCoeffs, rvec2, tvec2, 2*squareSize);
hconcat(img1_copy_pose, img2_copy_pose, img_draw_poses);
imshow("Chessboard poses", img_draw_poses);
//! [compute-camera-displacement]
Mat R1, R2;
Rodrigues(rvec1, R1);
Rodrigues(rvec2, R2);
Mat R_1to2, t_1to2;
computeC2MC1(R1, tvec1, R2, tvec2, R_1to2, t_1to2);
Mat rvec_1to2;
Rodrigues(R_1to2, rvec_1to2);
//! [compute-camera-displacement]
//! [compute-plane-normal-at-camera-pose-1]
Mat normal = (Mat_<double>(3,1) << 0, 0, 1);
Mat normal1 = R1*normal;
//! [compute-plane-normal-at-camera-pose-1]
//! [compute-plane-distance-to-the-camera-frame-1]
Mat origin(3, 1, CV_64F, Scalar(0));
Mat origin1 = R1*origin + tvec1;
double d_inv1 = 1.0 / normal1.dot(origin1);
//! [compute-plane-distance-to-the-camera-frame-1]
//! [compute-homography-from-camera-displacement]
Mat homography_euclidean = computeHomography(R_1to2, t_1to2, d_inv1, normal1);
Mat homography = cameraMatrix * homography_euclidean * cameraMatrix.inv();
homography /= homography.at<double>(2,2);
homography_euclidean /= homography_euclidean.at<double>(2,2);
//! [compute-homography-from-camera-displacement]
//Same but using absolute camera poses instead of camera displacement, just for check
Mat homography_euclidean2 = computeHomography(R1, tvec1, R2, tvec2, d_inv1, normal1);
Mat homography2 = cameraMatrix * homography_euclidean2 * cameraMatrix.inv();
homography_euclidean2 /= homography_euclidean2.at<double>(2,2);
homography2 /= homography2.at<double>(2,2);
cout << "\nEuclidean Homography:\n" << homography_euclidean << endl;
cout << "Euclidean Homography 2:\n" << homography_euclidean2 << endl << endl;
//! [estimate-homography]
Mat H = findHomography(corners1, corners2);
cout << "\nfindHomography H:\n" << H << endl;
//! [estimate-homography]
cout << "homography from camera displacement:\n" << homography << endl;
cout << "homography from absolute camera poses:\n" << homography2 << endl << endl;
//! [warp-chessboard]
Mat img1_warp;
warpPerspective(img1, img1_warp, H, img1.size());
//! [warp-chessboard]
Mat img1_warp_custom;
warpPerspective(img1, img1_warp_custom, homography, img1.size());
imshow("Warped image using homography computed from camera displacement", img1_warp_custom);
Mat img_draw_compare;
hconcat(img1_warp, img1_warp_custom, img_draw_compare);
imshow("Warped images comparison", img_draw_compare);
Mat img1_warp_custom2;
warpPerspective(img1, img1_warp_custom2, homography2, img1.size());
imshow("Warped image using homography computed from absolute camera poses", img1_warp_custom2);
waitKey();
}
const char* about = "Code for homography tutorial.\n"
"Example 3: homography from the camera displacement.\n";
const char* params
= "{ h help | false | print usage }"
"{ image1 | | path to the source chessboard image (left02.jpg) }"
"{ image2 | | path to the desired chessboard image (left01.jpg) }"
"{ intrinsics | | path to camera intrinsics (left_intrinsics.yml) }"
"{ width w | 9 | chessboard width }"
"{ height h | 6 | chessboard height }"
"{ square_size | 0.025 | chessboard square size }";
}
int main(int argc, char *argv[])
{
CommandLineParser parser(argc, argv, params);
if (parser.get<bool>("help"))
{
cout << about << endl;
parser.printMessage();
return 0;
}
Size patternSize(parser.get<int>("width"), parser.get<int>("height"));
float squareSize = (float) parser.get<double>("square_size");
homographyFromCameraDisplacement(parser.get<string>("image1"),
parser.get<string>("image2"),
patternSize, squareSize,
parser.get<string>("intrinsics"));
return 0;
}
#else
int main()
{
std::cerr << "FATAL ERROR: This sample requires opencv_aruco module (from opencv_contrib)" << std::endl;
return 0;
}
#endif

View File

@ -0,0 +1,195 @@
#include <iostream>
#include <opencv2/opencv_modules.hpp>
#ifdef HAVE_OPENCV_ARUCO
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
using namespace std;
using namespace cv;
namespace
{
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
void calcChessboardCorners(Size boardSize, float squareSize, vector<Point3f>& corners, Pattern patternType = CHESSBOARD)
{
corners.resize(0);
switch (patternType) {
case CHESSBOARD:
case CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float(j*squareSize),
float(i*squareSize), 0));
break;
case ASYMMETRIC_CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float((2*j + i % 2)*squareSize),
float(i*squareSize), 0));
break;
default:
CV_Error(Error::StsBadArg, "Unknown pattern type\n");
}
}
Mat computeHomography(const Mat &R_1to2, const Mat &tvec_1to2, const double d_inv, const Mat &normal)
{
Mat homography = R_1to2 + d_inv * tvec_1to2*normal.t();
return homography;
}
void computeC2MC1(const Mat &R1, const Mat &tvec1, const Mat &R2, const Mat &tvec2,
Mat &R_1to2, Mat &tvec_1to2)
{
//c2Mc1 = c2Mo * oMc1 = c2Mo * c1Mo.inv()
R_1to2 = R2 * R1.t();
tvec_1to2 = R2 * (-R1.t()*tvec1) + tvec2;
}
void decomposeHomography(const string &img1Path, const string &img2Path, const Size &patternSize,
const float squareSize, const string &intrinsicsPath)
{
Mat img1 = imread(img1Path);
Mat img2 = imread(img2Path);
vector<Point2f> corners1, corners2;
bool found1 = findChessboardCorners(img1, patternSize, corners1);
bool found2 = findChessboardCorners(img2, patternSize, corners2);
if (!found1 || !found2)
{
cout << "Error, cannot find the chessboard corners in both images." << endl;
return;
}
//! [compute-poses]
vector<Point3f> objectPoints;
calcChessboardCorners(patternSize, squareSize, objectPoints);
FileStorage fs(intrinsicsPath, FileStorage::READ);
Mat cameraMatrix, distCoeffs;
fs["camera_matrix"] >> cameraMatrix;
fs["distortion_coefficients"] >> distCoeffs;
Mat rvec1, tvec1;
solvePnP(objectPoints, corners1, cameraMatrix, distCoeffs, rvec1, tvec1);
Mat rvec2, tvec2;
solvePnP(objectPoints, corners2, cameraMatrix, distCoeffs, rvec2, tvec2);
//! [compute-poses]
//! [compute-camera-displacement]
Mat R1, R2;
Rodrigues(rvec1, R1);
Rodrigues(rvec2, R2);
Mat R_1to2, t_1to2;
computeC2MC1(R1, tvec1, R2, tvec2, R_1to2, t_1to2);
Mat rvec_1to2;
Rodrigues(R_1to2, rvec_1to2);
//! [compute-camera-displacement]
//! [compute-plane-normal-at-camera-pose-1]
Mat normal = (Mat_<double>(3,1) << 0, 0, 1);
Mat normal1 = R1*normal;
//! [compute-plane-normal-at-camera-pose-1]
//! [compute-plane-distance-to-the-camera-frame-1]
Mat origin(3, 1, CV_64F, Scalar(0));
Mat origin1 = R1*origin + tvec1;
double d_inv1 = 1.0 / normal1.dot(origin1);
//! [compute-plane-distance-to-the-camera-frame-1]
//! [compute-homography-from-camera-displacement]
Mat homography_euclidean = computeHomography(R_1to2, t_1to2, d_inv1, normal1);
Mat homography = cameraMatrix * homography_euclidean * cameraMatrix.inv();
homography /= homography.at<double>(2,2);
homography_euclidean /= homography_euclidean.at<double>(2,2);
//! [compute-homography-from-camera-displacement]
//! [decompose-homography-from-camera-displacement]
vector<Mat> Rs_decomp, ts_decomp, normals_decomp;
int solutions = decomposeHomographyMat(homography, cameraMatrix, Rs_decomp, ts_decomp, normals_decomp);
cout << "Decompose homography matrix computed from the camera displacement:" << endl << endl;
for (int i = 0; i < solutions; i++)
{
double factor_d1 = 1.0 / d_inv1;
Mat rvec_decomp;
Rodrigues(Rs_decomp[i], rvec_decomp);
cout << "Solution " << i << ":" << endl;
cout << "rvec from homography decomposition: " << rvec_decomp.t() << endl;
cout << "rvec from camera displacement: " << rvec_1to2.t() << endl;
cout << "tvec from homography decomposition: " << ts_decomp[i].t() << " and scaled by d: " << factor_d1 * ts_decomp[i].t() << endl;
cout << "tvec from camera displacement: " << t_1to2.t() << endl;
cout << "plane normal from homography decomposition: " << normals_decomp[i].t() << endl;
cout << "plane normal at camera 1 pose: " << normal1.t() << endl << endl;
}
//! [decompose-homography-from-camera-displacement]
//! [estimate homography]
Mat H = findHomography(corners1, corners2);
//! [estimate homography]
//! [decompose-homography-estimated-by-findHomography]
solutions = decomposeHomographyMat(H, cameraMatrix, Rs_decomp, ts_decomp, normals_decomp);
cout << "Decompose homography matrix estimated by findHomography():" << endl << endl;
for (int i = 0; i < solutions; i++)
{
double factor_d1 = 1.0 / d_inv1;
Mat rvec_decomp;
Rodrigues(Rs_decomp[i], rvec_decomp);
cout << "Solution " << i << ":" << endl;
cout << "rvec from homography decomposition: " << rvec_decomp.t() << endl;
cout << "rvec from camera displacement: " << rvec_1to2.t() << endl;
cout << "tvec from homography decomposition: " << ts_decomp[i].t() << " and scaled by d: " << factor_d1 * ts_decomp[i].t() << endl;
cout << "tvec from camera displacement: " << t_1to2.t() << endl;
cout << "plane normal from homography decomposition: " << normals_decomp[i].t() << endl;
cout << "plane normal at camera 1 pose: " << normal1.t() << endl << endl;
}
//! [decompose-homography-estimated-by-findHomography]
}
const char* about = "Code for homography tutorial.\n"
"Example 4: decompose the homography matrix.\n";
const char* params
= "{ h help | false | print usage }"
"{ image1 | | path to the source chessboard image (left02.jpg) }"
"{ image2 | | path to the desired chessboard image (left01.jpg) }"
"{ intrinsics | | path to camera intrinsics (left_intrinsics.yml) }"
"{ width w | 9 | chessboard width }"
"{ height h | 6 | chessboard height }"
"{ square_size | 0.025 | chessboard square size }";
}
int main(int argc, char *argv[])
{
CommandLineParser parser(argc, argv, params);
if (parser.get<bool>("help"))
{
cout << about << endl;
parser.printMessage();
return 0;
}
Size patternSize(parser.get<int>("width"), parser.get<int>("height"));
float squareSize = (float) parser.get<double>("square_size");
decomposeHomography(parser.get<string>("image1"),
parser.get<string>("image2"),
patternSize, squareSize,
parser.get<string>("intrinsics"));
return 0;
}
#else
int main()
{
std::cerr << "FATAL ERROR: This sample requires opencv_aruco module (from opencv_contrib)" << std::endl;
return 0;
}
#endif