Merge pull request #16366 from themechanicalcoder:features2D-tutorial-python
* Add python version of panorama_stitching_rotating_camera and perspective_correction
* Updated code
* added in the docs
* added python code in the docs
* docs change
* Add java tutorial as well
* Add toggle in documentation
* Added the link for Java code
* format code
* Refactored code
parent 86e5e8d765
commit 126b0d855f
@ -12,7 +12,9 @@ For detailed explanations about the theory, please refer to a computer vision co
* An Invitation to 3-D Vision: From Images to Geometric Models, @cite Ma:2003:IVI
* Computer Vision: Algorithms and Applications, @cite RS10

The tutorial code can be found [here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/features2D/Homography).
The tutorial code can be found here [C++](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/features2D/Homography),
[Python](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/features2D/Homography),
[Java](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/features2D/Homography).
The images used in this tutorial can be found [here](https://github.com/opencv/opencv/tree/3.4/samples/data) (`left*.jpg`).

Basic theory {#tutorial_homography_Basic_theory}
@ -171,15 +173,45 @@ The following image shows the source image (left) and the chessboard view that w
The first step is to detect the chessboard corners in the source and desired images:

@add_toggle_cpp
@snippet perspective_correction.cpp find-corners
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/features2D/Homography/perspective_correction.py find-corners
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/features2D/Homography/PerspectiveCorrection.java find-corners
@end_toggle
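For reference, a minimal Python sketch of this step (mirroring the Python sample added in this commit, and assuming the 9x6 `left*.jpg` chessboard images):

```python
import cv2 as cv

img1 = cv.imread(cv.samples.findFile("left02.jpg"))   # source chessboard view
img2 = cv.imread(cv.samples.findFile("left01.jpg"))   # desired chessboard view
ret1, corners1 = cv.findChessboardCorners(img1, (9, 6))
ret2, corners2 = cv.findChessboardCorners(img2, (9, 6))
```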
The homography is estimated easily with:

@add_toggle_cpp
@snippet perspective_correction.cpp estimate-homography
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/features2D/Homography/perspective_correction.py estimate-homography
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/features2D/Homography/PerspectiveCorrection.java estimate-homography
@end_toggle
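In Python this is a one-liner; a minimal sketch, assuming `corners1` and `corners2` come from the previous step:

```python
# findHomography returns the 3x3 homography and an inlier mask (unused here)
H, _ = cv.findHomography(corners1, corners2)
print(H)
```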
To warp the source chessboard view into the desired chessboard view, we use @ref cv::warpPerspective:

@add_toggle_cpp
@snippet perspective_correction.cpp warp-chessboard
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/features2D/Homography/perspective_correction.py warp-chessboard
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/features2D/Homography/PerspectiveCorrection.java warp-chessboard
@end_toggle
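A minimal Python sketch of the warping step (assuming `img1` and `H` from the previous steps):

```python
# warp the source view into the desired view; the output keeps the source image size
img1_warp = cv.warpPerspective(img1, H, (img1.shape[1], img1.shape[0]))
```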
The result image is:
@ -187,7 +219,17 @@ The result image is:
To compute the coordinates of the source corners transformed by the homography:

@add_toggle_cpp
@snippet perspective_correction.cpp compute-transformed-corners
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/features2D/Homography/perspective_correction.py compute-transformed-corners
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/features2D/Homography/PerspectiveCorrection.java compute-transformed-corners
@end_toggle
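At its core this is just a matrix-vector product in homogeneous coordinates; a minimal Python sketch (assuming `corners1` and `H` from the previous steps, with `numpy` imported as `np`):

```python
import numpy as np

for corner in corners1.reshape(-1, 2):
    pt1 = np.array([corner[0], corner[1], 1.0])  # source corner in homogeneous coordinates
    pt2 = H.dot(pt1)                             # apply the homography
    pt2 = pt2 / pt2[2]                           # normalize back to (x, y, 1)
```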
To check the correctness of the calculation, the matching lines are displayed:
@ -499,17 +541,57 @@ The figure below shows the two generated views of the Suzanne model, with only a
With the known associated camera poses and the intrinsic parameters, the relative rotation between the two views can be computed:

@add_toggle_cpp
@snippet panorama_stitching_rotating_camera.cpp extract-rotation
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/features2D/Homography/panorama_stitching_rotating_camera.py extract-rotation
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/features2D/Homography/PanoramaStitchingRotatingCamera.java extract-rotation
@end_toggle

@add_toggle_cpp
@snippet panorama_stitching_rotating_camera.cpp compute-rotation-displacement
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/features2D/Homography/panorama_stitching_rotating_camera.py compute-rotation-displacement
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/features2D/Homography/PanoramaStitchingRotatingCamera.java compute-rotation-displacement
@end_toggle
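A minimal Python sketch of these two steps (assuming `c1Mo` and `c2Mo` are the 4x4 camera poses exported from Blender, with `numpy` imported as `np`):

```python
R1 = c1Mo[0:3, 0:3]                    # rotation block of the first camera pose
R2 = c2Mo[0:3, 0:3]                    # rotation block of the second camera pose
R_2to1 = np.dot(R1, R2.transpose())    # relative rotation from view 2 to view 1
```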
Here, the second image will be stitched with respect to the first image. The homography can be calculated using the formula above:

@add_toggle_cpp
@snippet panorama_stitching_rotating_camera.cpp compute-homography
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/features2D/Homography/panorama_stitching_rotating_camera.py compute-homography
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/features2D/Homography/PanoramaStitchingRotatingCamera.java compute-homography
@end_toggle
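In code, the formula `H = K * R_2to1 * K^-1` translates directly; a minimal Python sketch, assuming `cameraMatrix` holds the intrinsic matrix K:

```python
H = cameraMatrix.dot(R_2to1).dot(np.linalg.inv(cameraMatrix))
H = H / H[2, 2]   # normalize so that H[2,2] = 1
```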
The stitching is then done simply with:

@add_toggle_cpp
@snippet panorama_stitching_rotating_camera.cpp stitch
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/features2D/Homography/panorama_stitching_rotating_camera.py stitch
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/features2D/Homography/PanoramaStitchingRotatingCamera.java stitch
@end_toggle
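A minimal Python sketch of the stitching step (assuming `img1`, `img2` and `H` from above): the second image is warped into the first image's frame on a canvas twice as wide, and the first image is then copied into the left half.

```python
img_stitch = cv.warpPerspective(img2, H, (img2.shape[1] * 2, img2.shape[0]))
img_stitch[0:img1.shape[0], 0:img1.shape[1]] = img1   # paste the first image on the left
```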
The resulting image is:
samples/java/tutorial_code/features2D/Homography/PanoramaStitchingRotatingCamera.java
@@ -0,0 +1,89 @@
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.*;
import org.opencv.core.Range;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

class PanoramaStitchingRotatingCameraRun {
    void basicPanoramaStitching (String[] args) {
        String img1path = args[0], img2path = args[1];
        Mat img1 = new Mat(), img2 = new Mat();
        img1 = Imgcodecs.imread(img1path);
        img2 = Imgcodecs.imread(img2path);

        //! [camera-pose-from-Blender-at-location-1]
        Mat c1Mo = new Mat( 4, 4, CvType.CV_64FC1 );
        c1Mo.put(0, 0, 0.9659258723258972, 0.2588190734386444, 0.0, 1.5529145002365112,
                 0.08852133899927139, -0.3303661346435547, -0.9396926164627075, -0.10281121730804443,
                 -0.24321036040782928, 0.9076734185218811, -0.342020183801651, 6.130080699920654,
                 0, 0, 0, 1 );
        //! [camera-pose-from-Blender-at-location-1]

        //! [camera-pose-from-Blender-at-location-2]
        Mat c2Mo = new Mat( 4, 4, CvType.CV_64FC1 );
        c2Mo.put(0, 0, 0.9659258723258972, -0.2588190734386444, 0.0, -1.5529145002365112,
                 -0.08852133899927139, -0.3303661346435547, -0.9396926164627075, -0.10281121730804443,
                 0.24321036040782928, 0.9076734185218811, -0.342020183801651, 6.130080699920654,
                 0, 0, 0, 1);
        //! [camera-pose-from-Blender-at-location-2]

        //! [camera-intrinsics-from-Blender]
        Mat cameraMatrix = new Mat(3, 3, CvType.CV_64FC1);
        cameraMatrix.put(0, 0, 700.0, 0.0, 320.0, 0.0, 700.0, 240.0, 0, 0, 1 );
        //! [camera-intrinsics-from-Blender]

        //! [extract-rotation]
        Range rowRange = new Range(0,3);
        Range colRange = new Range(0,3);
        //! [extract-rotation]

        //! [compute-rotation-displacement]
        //c1Mo * oMc2
        Mat R1 = new Mat(c1Mo, rowRange, colRange);
        Mat R2 = new Mat(c2Mo, rowRange, colRange);
        Mat R_2to1 = new Mat();
        Core.gemm(R1, R2.t(), 1, new Mat(), 0, R_2to1 );
        //! [compute-rotation-displacement]

        //! [compute-homography]
        Mat tmp = new Mat(), H = new Mat();
        Core.gemm(cameraMatrix, R_2to1, 1, new Mat(), 0, tmp);
        Core.gemm(tmp, cameraMatrix.inv(), 1, new Mat(), 0, H);
        Scalar s = new Scalar(H.get(2, 2)[0]);
        Core.divide(H, s, H);
        System.out.println(H.dump());
        //! [compute-homography]

        //! [stitch]
        Mat img_stitch = new Mat();
        Imgproc.warpPerspective(img2, img_stitch, H, new Size(img2.cols()*2, img2.rows()) );
        Mat half = new Mat();
        half = new Mat(img_stitch, new Rect(0, 0, img1.cols(), img1.rows()));
        img1.copyTo(half);
        //! [stitch]

        Mat img_compare = new Mat();
        Mat img_space = Mat.zeros(new Size(50, img1.rows()), CvType.CV_8UC3);
        List<Mat> list = new ArrayList<>();
        list.add(img1);
        list.add(img_space);
        list.add(img2);
        Core.hconcat(list, img_compare);

        HighGui.imshow("Compare Images", img_compare);
        HighGui.imshow("Panorama Stitching", img_stitch);
        HighGui.waitKey(0);
        System.exit(0);
    }
}

public class PanoramaStitchingRotatingCamera {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        new PanoramaStitchingRotatingCameraRun().basicPanoramaStitching(args);
    }
}
samples/java/tutorial_code/features2D/Homography/PerspectiveCorrection.java
@@ -0,0 +1,89 @@
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import org.opencv.core.*;
import org.opencv.calib3d.Calib3d;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

class PerspectiveCorrectionRun {
    void perspectiveCorrection (String[] args) {
        String img1Path = args[0], img2Path = args[1];
        Mat img1 = Imgcodecs.imread(img1Path);
        Mat img2 = Imgcodecs.imread(img2Path);

        //! [find-corners]
        MatOfPoint2f corners1 = new MatOfPoint2f(), corners2 = new MatOfPoint2f();
        boolean found1 = Calib3d.findChessboardCorners(img1, new Size(9, 6), corners1 );
        boolean found2 = Calib3d.findChessboardCorners(img2, new Size(9, 6), corners2 );
        //! [find-corners]

        if (!found1 || !found2) {
            System.out.println("Error, cannot find the chessboard corners in both images.");
            System.exit(-1);
        }

        //! [estimate-homography]
        Mat H = new Mat();
        H = Calib3d.findHomography(corners1, corners2);
        System.out.println(H.dump());
        //! [estimate-homography]

        //! [warp-chessboard]
        Mat img1_warp = new Mat();
        Imgproc.warpPerspective(img1, img1_warp, H, img1.size());
        //! [warp-chessboard]

        Mat img_draw_warp = new Mat();
        List<Mat> list1 = new ArrayList<>(), list2 = new ArrayList<>();
        list1.add(img2);
        list1.add(img1_warp);
        Core.hconcat(list1, img_draw_warp);
        HighGui.imshow("Desired chessboard view / Warped source chessboard view", img_draw_warp);

        //! [compute-transformed-corners]
        Mat img_draw_matches = new Mat();
        list2.add(img1);
        list2.add(img2);
        Core.hconcat(list2, img_draw_matches);
        Point[] corners1Arr = corners1.toArray();

        for (int i = 0 ; i < corners1Arr.length; i++) {
            Mat pt1 = new Mat(3, 1, CvType.CV_64FC1), pt2 = new Mat();
            pt1.put(0, 0, corners1Arr[i].x, corners1Arr[i].y, 1 );

            Core.gemm(H, pt1, 1, new Mat(), 0, pt2);
            double[] data = pt2.get(2, 0);
            Core.divide(pt2, new Scalar(data[0]), pt2);

            double[] data1 = pt2.get(0, 0);
            double[] data2 = pt2.get(1, 0);
            Point end = new Point((int)(img1.cols() + data1[0]), (int)data2[0]);
            Imgproc.line(img_draw_matches, corners1Arr[i], end, RandomColor(), 2);
        }

        HighGui.imshow("Draw matches", img_draw_matches);
        HighGui.waitKey(0);
        //! [compute-transformed-corners]

        System.exit(0);
    }

    Scalar RandomColor () {
        Random rng = new Random();
        int r = rng.nextInt(256);
        int g = rng.nextInt(256);
        int b = rng.nextInt(256);
        return new Scalar(r, g, b);
    }
}

public class PerspectiveCorrection {
    public static void main (String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        new PerspectiveCorrectionRun().perspectiveCorrection(args);
    }
}
samples/python/tutorial_code/features2D/Homography/panorama_stitching_rotating_camera.py
@@ -0,0 +1,71 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Python 2/3 compatibility
from __future__ import print_function

import numpy as np
import cv2 as cv

def basicPanoramaStitching(img1Path, img2Path):
    img1 = cv.imread(cv.samples.findFile(img1Path))
    img2 = cv.imread(cv.samples.findFile(img2Path))

    # [camera-pose-from-Blender-at-location-1]
    c1Mo = np.array([[0.9659258723258972, 0.2588190734386444, 0.0, 1.5529145002365112],
                     [0.08852133899927139, -0.3303661346435547, -0.9396926164627075, -0.10281121730804443],
                     [-0.24321036040782928, 0.9076734185218811, -0.342020183801651, 6.130080699920654],
                     [0, 0, 0, 1]], dtype=np.float64)
    # [camera-pose-from-Blender-at-location-1]

    # [camera-pose-from-Blender-at-location-2]
    c2Mo = np.array([[0.9659258723258972, -0.2588190734386444, 0.0, -1.5529145002365112],
                     [-0.08852133899927139, -0.3303661346435547, -0.9396926164627075, -0.10281121730804443],
                     [0.24321036040782928, 0.9076734185218811, -0.342020183801651, 6.130080699920654],
                     [0, 0, 0, 1]], dtype=np.float64)
    # [camera-pose-from-Blender-at-location-2]

    # [camera-intrinsics-from-Blender]
    cameraMatrix = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0, 0, 1]], dtype=np.float32)
    # [camera-intrinsics-from-Blender]

    # [extract-rotation]
    R1 = c1Mo[0:3, 0:3]
    R2 = c2Mo[0:3, 0:3]
    # [extract-rotation]

    # [compute-rotation-displacement]
    R2 = R2.transpose()
    R_2to1 = np.dot(R1, R2)
    # [compute-rotation-displacement]

    # [compute-homography]
    H = cameraMatrix.dot(R_2to1).dot(np.linalg.inv(cameraMatrix))
    H = H / H[2][2]
    # [compute-homography]

    # [stitch]
    img_stitch = cv.warpPerspective(img2, H, (img2.shape[1]*2, img2.shape[0]))
    img_stitch[0:img1.shape[0], 0:img1.shape[1]] = img1
    # [stitch]

    img_space = np.zeros((img1.shape[0], 50, 3), dtype=np.uint8)
    img_compare = cv.hconcat([img1, img_space, img2])

    cv.imshow("Final", img_compare)
    cv.imshow("Panorama", img_stitch)
    cv.waitKey(0)

def main():
    import argparse
    parser = argparse.ArgumentParser(description="Code for homography tutorial. Example 5: basic panorama stitching from a rotating camera.")
    parser.add_argument("-I1", "--image1", help="path to first image", default="Blender_Suzanne1.jpg")
    parser.add_argument("-I2", "--image2", help="path to second image", default="Blender_Suzanne2.jpg")
    args = parser.parse_args()
    print("Panorama Stitching Started")
    basicPanoramaStitching(args.image1, args.image2)
    print("Panorama Stitching Completed Successfully")


if __name__ == '__main__':
    main()
samples/python/tutorial_code/features2D/Homography/perspective_correction.py
@@ -0,0 +1,74 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Python 2/3 compatibility
from __future__ import print_function

import numpy as np
import cv2 as cv
import sys


def randomColor():
    color = np.random.randint(0, 255, (1, 3))
    return color[0].tolist()

def perspectiveCorrection(img1Path, img2Path, patternSize):
    img1 = cv.imread(cv.samples.findFile(img1Path))
    img2 = cv.imread(cv.samples.findFile(img2Path))

    # [find-corners]
    ret1, corners1 = cv.findChessboardCorners(img1, patternSize)
    ret2, corners2 = cv.findChessboardCorners(img2, patternSize)
    # [find-corners]

    if not ret1 or not ret2:
        print("Error, cannot find the chessboard corners in both images.")
        sys.exit(-1)

    # [estimate-homography]
    H, _ = cv.findHomography(corners1, corners2)
    print(H)
    # [estimate-homography]

    # [warp-chessboard]
    img1_warp = cv.warpPerspective(img1, H, (img1.shape[1], img1.shape[0]))
    # [warp-chessboard]

    img_draw_warp = cv.hconcat([img2, img1_warp])
    cv.imshow("Desired chessboard view / Warped source chessboard view", img_draw_warp)

    corners1 = corners1.tolist()
    corners1 = [a[0] for a in corners1]

    # [compute-transformed-corners]
    img_draw_matches = cv.hconcat([img1, img2])
    for i in range(len(corners1)):
        pt1 = np.array([corners1[i][0], corners1[i][1], 1])
        pt1 = pt1.reshape(3, 1)
        pt2 = np.dot(H, pt1)
        pt2 = pt2/pt2[2]
        end = (int(img1.shape[1] + pt2[0]), int(pt2[1]))
        cv.line(img_draw_matches, tuple([int(j) for j in corners1[i]]), end, randomColor(), 2)

    cv.imshow("Draw matches", img_draw_matches)
    cv.waitKey(0)
    # [compute-transformed-corners]

def main():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('-I1', "--image1", help="Path to the first image", default="left02.jpg")
    parser.add_argument('-I2', "--image2", help="Path to the second image", default="left01.jpg")
    parser.add_argument('-H', "--height", help="Height of pattern size", type=int, default=6)
    parser.add_argument('-W', "--width", help="Width of pattern size", type=int, default=9)
    args = parser.parse_args()

    img1Path = args.image1
    img2Path = args.image2
    h = args.height
    w = args.width
    perspectiveCorrection(img1Path, img2Path, (w, h))

if __name__ == "__main__":
    main()