opencv/samples/cpp/calibration.cpp
Wenfeng CAI 31be03a805 Merge pull request #12772 from xoox:calib-release-object
More accurate pinhole camera calibration with imperfect planar target (#12772)
43 commits:

* Add derivatives with respect to object points

Add an output parameter that receives the derivatives of the image points
with respect to the 3D coordinates of the object points. The output
jacobian is a 2Nx3N matrix, where N is the number of points.

This commit breaks compatibility with the old function signature.

* Set zero for dpdo matrix before using

dpdo is a sparse matrix whose only non-zero values lie close to the main
diagonal. Zero it beforehand, because only the elements near the main
diagonal are computed.
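
As an illustration only (not code from this patch), the 2Nx3N block-diagonal
shape of dpdo can be reproduced numerically with the public cv::projectPoints()
and finite differences: image point i depends only on object point i, so only
the 2x3 block at rows 2i..2i+1, columns 3i..3i+2 is non-zero. A minimal sketch,
assuming an arbitrary pose and camera matrix:

    #include "opencv2/calib3d.hpp"
    #include <cstdio>
    #include <vector>

    int main()
    {
        // arbitrary test data: N = 3 object points, some pose and intrinsics
        std::vector<cv::Point3f> obj = { {0,0,0}, {1,0,0}, {0,1,0} };
        cv::Mat rvec = (cv::Mat_<double>(3,1) << 0.1, -0.2, 0.05);
        cv::Mat tvec = (cv::Mat_<double>(3,1) << 0.0, 0.0, 5.0);
        cv::Mat K = (cv::Mat_<double>(3,3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
        cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

        const int N = (int)obj.size();
        const float h = 1e-3f;                            // finite-difference step
        cv::Mat dpdo = cv::Mat::zeros(2*N, 3*N, CV_64F);  // numeric stand-in for dpdo

        std::vector<cv::Point2f> ip0, ip1;
        cv::projectPoints(obj, rvec, tvec, K, dist, ip0);
        for (int j = 0; j < N; j++)                       // perturb each object point
            for (int c = 0; c < 3; c++)                   // along x, y and z
            {
                std::vector<cv::Point3f> objp = obj;
                if (c == 0) objp[j].x += h;
                else if (c == 1) objp[j].y += h;
                else objp[j].z += h;
                cv::projectPoints(objp, rvec, tvec, K, dist, ip1);
                for (int i = 0; i < N; i++)               // non-zero only when i == j
                {
                    dpdo.at<double>(2*i,   3*j + c) = (ip1[i].x - ip0[i].x) / h;
                    dpdo.at<double>(2*i+1, 3*j + c) = (ip1[i].y - ip0[i].y) / h;
                }
            }
        std::printf("dpdo is %dx%d (2Nx3N)\n", dpdo.rows, dpdo.cols);
        return 0;
    }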

* Add jacobian columns to projectPoints()

Columns holding the derivatives with respect to the coordinates of the
3D object points are added to the output jacobian matrix. This might break
callers that assume a fixed column layout of the jacobian.
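
For context (a sketch, not this patch's code), downstream code commonly slices
that jacobian by fixed column offsets, which is exactly the assumption that
appending columns can break. The documented layout for cv::projectPoints() is
2N x (10 + numDistCoeffs): rotation vector, translation vector, focal lengths,
principal point, then distortion coefficients:

    #include "opencv2/calib3d.hpp"
    #include <vector>

    int main()
    {
        // arbitrary test data
        std::vector<cv::Point3f> obj = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
        cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
        cv::Mat tvec = (cv::Mat_<double>(3,1) << 0, 0, 4);
        cv::Mat K = (cv::Mat_<double>(3,3) << 600, 0, 320, 0, 600, 240, 0, 0, 1);
        cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

        std::vector<cv::Point2f> img;
        cv::Mat J;
        cv::projectPoints(obj, rvec, tvec, K, dist, img, J);
        CV_Assert(J.rows == 2 * (int)obj.size());

        cv::Mat dpdrot  = J.colRange(0, 3);        // d(image points)/d(rotation vector)
        cv::Mat dpdt    = J.colRange(3, 6);        // d/d(translation vector)
        cv::Mat dpdf    = J.colRange(6, 8);        // d/d(fx, fy)
        cv::Mat dpdc    = J.colRange(8, 10);       // d/d(cx, cy)
        cv::Mat dpddist = J.colRange(10, J.cols);  // d/d(distortion coefficients)
        return 0;
    }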

* Adapt test code to updated project functions

The test cases for projectPoints() and cvProjectPoints2() are updated to
match the new function signatures.

* Add accuracy test code for dpdo

* Add badarg test for dpdo

* Add new enum item for new calibration method

CALIB_RELEASE_OBJECT indicates whether to release the 3D coordinates of the
object points. The method was proposed in: K. H. Strobl and G. Hirzinger.
"More Accurate Pinhole Camera Calibration with Imperfect Planar Target".
In Proceedings of the IEEE International Conference on Computer Vision
(ICCV 2011), 1st IEEE Workshop on Challenges and Opportunities in Robot
Perception, Barcelona, Spain, pp. 1068-1075, November 2011.

* Add releasing object method into internal function

It's a simple extension of the standard calibration scheme. We choose to
fix the first and last object point and a user-selected fixed point.

* Add interfaces for extended calibration method

* Refine document for calibrateCamera()

When object points are released, only the z coordinate of
objectPoints[0].back() is fixed.

* Add link to strobl2011iccv paper

* Improve documentation for calibrateCamera()

* Add implementations of wrapping calibrateCamera()

* Add checking for params of new calibration method

If the input parameters do not meet the requirements, fall back to the
standard calibration method.

* Add camera calibration method of releasing object

The current implementation performs as well as or better than
https://github.com/xoox/calibrel

* Update doc for CALIB_RELEASE_OBJECT

CALIB_USE_QR or CALIB_USE_LU can be used for faster calibration, at the
cost of potentially less precise and less stable results in some rare cases.

* Add RELEASE_OBJECT calibration to tutorial code

To select the object-releasing calibration method, a command line
parameter `-d=<number>` should be provided.

* Update tutorial doc for camera_calibration

If the method of releasing object points is merged into OpenCV, I expect
it to be first released in 4.1.

* Reduce epsilon for cornerSubPix()

An epsilon of 0.1 is too coarse. More precise corner positions are
required by the object-releasing calibration method.
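
A minimal sketch of the tighter criteria (the helper name here is illustrative;
the sample's cornerSubPix() call further below uses the same parameters):

    #include "opencv2/imgproc.hpp"
    #include <vector>

    // refine detected corners with epsilon 0.0001 instead of a looser 0.1
    static void refineCorners(const cv::Mat& gray, std::vector<cv::Point2f>& corners,
                              int winSize = 11)
    {
        cv::cornerSubPix(gray, corners, cv::Size(winSize, winSize), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.0001));
    }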

* Refine camera calibration tutorial

The hypothesized board coordinates are used to indicate which distance
between two specified object points must be measured.

* Update sample calibration code method selection

Similar to the camera_calibration tutorial application, a command line
argument `-dt=<number>` is used to select the calibration method.

* Add guard to flags of cvCalibrateCamera2()

cvCalibrateCamera2() does not accept CALIB_RELEASE_OBJECT unless an
overloaded interface is added in the future.

* Simplify fallback when iFixedPoint is out of range

* Refactor projectPoints() to keep compatibilities

* Fix arg string "Bad rvecs header"

* Read calibration flags from test data files

Instead of being hard-coded in the source file, the calibration flags are
read from the test data files.
opencv_extra/testdata/cv/cameracalibration/calib?.dat must be kept in sync
with the test code.

* Add new C interface of cvCalibrateCamera4()

With this newly added C interface, the extended calibration method with
CALIB_RELEASE_OBJECT can be called through the C API.

* Add regression test of extended calibration method

It has been tested with the newly added test data in the
xoox:calib-release-object branch of opencv_extra.

* Fix assertion in test_cameracalibration.cpp

The total number of refined 3D object coordinates is checked.

* Add checker for iFixedPoint in cvCalibrateCamera4

If iFixedPoint is out of the valid range, fall back to the standard method.

* Fix documentation for overloaded calibrateCamera()

* Remove calibration flag of CALIB_RELEASE_OBJECT

The method selection is based on the value of the fixed-point index.
For negative values, the standard calibration method is chosen. Values in
the valid range select the object-releasing calibration method.

* Use new interfaces instead of function overload

Existing interfaces are preserved and new interfaces are added. Since
most of the code base is shared, calibrateCamera() is now a wrapper
around calibrateCameraRO().
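
A condensed sketch of the resulting call pattern, following runCalibration() in
the sample below (the wrapper helper calibrate() is illustrative, not part of
OpenCV; calibrateCameraRO() itself is the new interface):

    #include "opencv2/calib3d.hpp"
    #include <vector>

    // iFixedPoint < 0 keeps the standard calibrateCamera() behaviour; a valid index
    // (the sample uses boardSize.width - 1, the last point of the first row) enables
    // the object-releasing method, and newObjPoints receives the refined coordinates.
    static double calibrate(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                            const std::vector<std::vector<cv::Point2f> >& imagePoints,
                            cv::Size imageSize, int iFixedPoint,
                            cv::Mat& cameraMatrix, cv::Mat& distCoeffs,
                            std::vector<cv::Mat>& rvecs, std::vector<cv::Mat>& tvecs,
                            std::vector<cv::Point3f>& newObjPoints, int flags)
    {
        return cv::calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
                                     cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints,
                                     flags);  // e.g. flags | cv::CALIB_USE_LU for speed
    }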

* Fix exported name of calibrateCameraRO()

* Update documentation for calibrateCameraRO()

The circumstances in which this method is most helpful are described.

* Add note on the rigidity of the calibration target

* Update documentation for calibrateCameraRO()

It is clarified that iFixedPoint is used as a switch to select the
calibration method. If the input data are not valid, exceptions are
thrown instead of falling back.

* Clarify iFixedPoint as switch and remove fallback

iFixedPoint is now used as the switch for calibration method selection.
No fallback scheme is used anymore; if the input data are not valid,
exceptions are thrown.

* Add badarg test for object-releasing method

* Fix document format of sample list

List items of the same level should be indented the same way; otherwise
Doxygen formats them as nested lists.

* Add brief intro for objectPoints and imagePoints

* Sync tutorial to sample calibration code

* Update tutorial compatibility version to 4.0
2018-10-25 19:38:55 +03:00


#include "opencv2/core.hpp"
#include <opencv2/core/utility.hpp>
#include "opencv2/imgproc.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include <cctype>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <iostream>
using namespace cv;
using namespace std;
const char * usage =
" \nexample command line for calibration from a live feed.\n"
" calibration -w=4 -h=5 -s=0.025 -o=camera.yml -op -oe\n"
" \n"
" example command line for calibration from a list of stored images:\n"
" imagelist_creator image_list.xml *.png\n"
" calibration -w=4 -h=5 -s=0.025 -o=camera.yml -op -oe image_list.xml\n"
" where image_list.xml is the standard OpenCV XML/YAML\n"
" use imagelist_creator to create the xml or yaml list\n"
" file consisting of the list of strings, e.g.:\n"
" \n"
"<?xml version=\"1.0\"?>\n"
"<opencv_storage>\n"
"<images>\n"
"view000.png\n"
"view001.png\n"
"<!-- view002.png -->\n"
"view003.png\n"
"view010.png\n"
"one_extra_view.jpg\n"
"</images>\n"
"</opencv_storage>\n";
const char* liveCaptureHelp =
"When the live video from camera is used as input, the following hot-keys may be used:\n"
" <ESC>, 'q' - quit the program\n"
" 'g' - start capturing images\n"
" 'u' - switch undistortion on/off\n";
static void help()
{
printf( "This is a camera calibration sample.\n"
"Usage: calibration\n"
" -w=<board_width> # the number of inner corners per one of board dimension\n"
" -h=<board_height> # the number of inner corners per another board dimension\n"
" [-pt=<pattern>] # the type of pattern: chessboard or circles' grid\n"
" [-n=<number_of_frames>] # the number of frames to use for calibration\n"
" # (if not specified, it will be set to the number\n"
" # of board views actually available)\n"
" [-d=<delay>] # a minimum delay in ms between subsequent attempts to capture a next view\n"
" # (used only for video capturing)\n"
" [-s=<squareSize>] # square size in some user-defined units (1 by default)\n"
" [-o=<out_camera_params>] # the output filename for intrinsic [and extrinsic] parameters\n"
" [-op] # write detected feature points\n"
" [-oe] # write extrinsic parameters\n"
" [-oo] # write refined 3D object points\n"
" [-zt] # assume zero tangential distortion\n"
" [-a=<aspectRatio>] # fix aspect ratio (fx/fy)\n"
" [-p] # fix the principal point at the center\n"
" [-v] # flip the captured images around the horizontal axis\n"
" [-V] # use a video file, and not an image list, uses\n"
" # [input_data] string for the video file name\n"
" [-su] # show undistorted images after calibration\n"
" [-ws=<number_of_pixel>] # Half of search window for cornerSubPix (11 by default)\n"
" [-dt=<distance>] # actual distance between top-left and top-right corners of\n"
" # the calibration grid. If this parameter is specified, a more\n"
" # accurate calibration method will be used which may be better\n"
" # with inaccurate, roughly planar target.\n"
" [input_data] # input data, one of the following:\n"
" # - text file with a list of the images of the board\n"
" # the text file can be generated with imagelist_creator\n"
" # - name of video file with a video of the board\n"
" # if input_data not specified, a live view from the camera is used\n"
"\n" );
printf("\n%s",usage);
printf( "\n%s", liveCaptureHelp );
}
enum { DETECTION = 0, CAPTURING = 1, CALIBRATED = 2 };
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
static double computeReprojectionErrors(
const vector<vector<Point3f> >& objectPoints,
const vector<vector<Point2f> >& imagePoints,
const vector<Mat>& rvecs, const vector<Mat>& tvecs,
const Mat& cameraMatrix, const Mat& distCoeffs,
vector<float>& perViewErrors )
{
vector<Point2f> imagePoints2;
int i, totalPoints = 0;
double totalErr = 0, err;
perViewErrors.resize(objectPoints.size());
for( i = 0; i < (int)objectPoints.size(); i++ )
{
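// reproject the object points with the estimated pose and intrinsics, then
// accumulate the squared reprojection error for this view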
projectPoints(Mat(objectPoints[i]), rvecs[i], tvecs[i],
cameraMatrix, distCoeffs, imagePoints2);
err = norm(Mat(imagePoints[i]), Mat(imagePoints2), NORM_L2);
int n = (int)objectPoints[i].size();
perViewErrors[i] = (float)std::sqrt(err*err/n);
totalErr += err*err;
totalPoints += n;
}
return std::sqrt(totalErr/totalPoints);
}
static void calcChessboardCorners(Size boardSize, float squareSize, vector<Point3f>& corners, Pattern patternType = CHESSBOARD)
{
corners.resize(0);
switch(patternType)
{
case CHESSBOARD:
case CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float(j*squareSize),
float(i*squareSize), 0));
break;
case ASYMMETRIC_CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float((2*j + i % 2)*squareSize),
float(i*squareSize), 0));
break;
default:
CV_Error(Error::StsBadArg, "Unknown pattern type\n");
}
}
static bool runCalibration( vector<vector<Point2f> > imagePoints,
Size imageSize, Size boardSize, Pattern patternType,
float squareSize, float aspectRatio,
float grid_width, bool release_object,
int flags, Mat& cameraMatrix, Mat& distCoeffs,
vector<Mat>& rvecs, vector<Mat>& tvecs,
vector<float>& reprojErrs,
vector<Point3f>& newObjPoints,
double& totalAvgErr)
{
cameraMatrix = Mat::eye(3, 3, CV_64F);
if( flags & CALIB_FIX_ASPECT_RATIO )
cameraMatrix.at<double>(0,0) = aspectRatio;
distCoeffs = Mat::zeros(8, 1, CV_64F);
vector<vector<Point3f> > objectPoints(1);
calcChessboardCorners(boardSize, squareSize, objectPoints[0], patternType);
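// overwrite the nominal x of the last point in the first row (top-right corner of
// the grid) with grid_width, which is either squareSize*(boardSize.width-1) or the
// distance measured by the user and passed via -dt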
objectPoints[0][boardSize.width - 1].x = objectPoints[0][0].x + grid_width;
newObjPoints = objectPoints[0];
objectPoints.resize(imagePoints.size(),objectPoints[0]);
double rms;
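// iFixedPoint selects the method: a negative value means the standard calibration,
// while the index of the last point in the first row enables the object-releasing
// refinement of calibrateCameraRO()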
int iFixedPoint = -1;
if (release_object)
iFixedPoint = boardSize.width - 1;
rms = calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints,
flags | CALIB_FIX_K3 | CALIB_USE_LU);
printf("RMS error reported by calibrateCamera: %g\n", rms);
bool ok = checkRange(cameraMatrix) && checkRange(distCoeffs);
if (release_object) {
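// show the refined coordinates of the four extreme corners of the board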
cout << "New board corners: " << endl;
cout << newObjPoints[0] << endl;
cout << newObjPoints[boardSize.width - 1] << endl;
cout << newObjPoints[boardSize.width * (boardSize.height - 1)] << endl;
cout << newObjPoints.back() << endl;
}
objectPoints.clear();
objectPoints.resize(imagePoints.size(), newObjPoints);
totalAvgErr = computeReprojectionErrors(objectPoints, imagePoints,
rvecs, tvecs, cameraMatrix, distCoeffs, reprojErrs);
return ok;
}
static void saveCameraParams( const string& filename,
Size imageSize, Size boardSize,
float squareSize, float aspectRatio, int flags,
const Mat& cameraMatrix, const Mat& distCoeffs,
const vector<Mat>& rvecs, const vector<Mat>& tvecs,
const vector<float>& reprojErrs,
const vector<vector<Point2f> >& imagePoints,
const vector<Point3f>& newObjPoints,
double totalAvgErr )
{
FileStorage fs( filename, FileStorage::WRITE );
time_t tt;
time( &tt );
struct tm *t2 = localtime( &tt );
char buf[1024];
strftime( buf, sizeof(buf)-1, "%c", t2 );
fs << "calibration_time" << buf;
if( !rvecs.empty() || !reprojErrs.empty() )
fs << "nframes" << (int)std::max(rvecs.size(), reprojErrs.size());
fs << "image_width" << imageSize.width;
fs << "image_height" << imageSize.height;
fs << "board_width" << boardSize.width;
fs << "board_height" << boardSize.height;
fs << "square_size" << squareSize;
if( flags & CALIB_FIX_ASPECT_RATIO )
fs << "aspectRatio" << aspectRatio;
if( flags != 0 )
{
sprintf( buf, "flags: %s%s%s%s",
flags & CALIB_USE_INTRINSIC_GUESS ? "+use_intrinsic_guess" : "",
flags & CALIB_FIX_ASPECT_RATIO ? "+fix_aspectRatio" : "",
flags & CALIB_FIX_PRINCIPAL_POINT ? "+fix_principal_point" : "",
flags & CALIB_ZERO_TANGENT_DIST ? "+zero_tangent_dist" : "" );
//cvWriteComment( *fs, buf, 0 );
}
fs << "flags" << flags;
fs << "camera_matrix" << cameraMatrix;
fs << "distortion_coefficients" << distCoeffs;
fs << "avg_reprojection_error" << totalAvgErr;
if( !reprojErrs.empty() )
fs << "per_view_reprojection_errors" << Mat(reprojErrs);
if( !rvecs.empty() && !tvecs.empty() )
{
CV_Assert(rvecs[0].type() == tvecs[0].type());
Mat bigmat((int)rvecs.size(), 6, rvecs[0].type());
for( int i = 0; i < (int)rvecs.size(); i++ )
{
Mat r = bigmat(Range(i, i+1), Range(0,3));
Mat t = bigmat(Range(i, i+1), Range(3,6));
CV_Assert(rvecs[i].rows == 3 && rvecs[i].cols == 1);
CV_Assert(tvecs[i].rows == 3 && tvecs[i].cols == 1);
//*.t() is MatExpr (not Mat) so we can use assignment operator
r = rvecs[i].t();
t = tvecs[i].t();
}
//cvWriteComment( *fs, "a set of 6-tuples (rotation vector + translation vector) for each view", 0 );
fs << "extrinsic_parameters" << bigmat;
}
if( !imagePoints.empty() )
{
Mat imagePtMat((int)imagePoints.size(), (int)imagePoints[0].size(), CV_32FC2);
for( int i = 0; i < (int)imagePoints.size(); i++ )
{
Mat r = imagePtMat.row(i).reshape(2, imagePtMat.cols);
Mat imgpti(imagePoints[i]);
imgpti.copyTo(r);
}
fs << "image_points" << imagePtMat;
}
if( !newObjPoints.empty() )
{
fs << "grid_points" << newObjPoints;
}
}
static bool readStringList( const string& filename, vector<string>& l )
{
l.resize(0);
FileStorage fs(filename, FileStorage::READ);
if( !fs.isOpened() )
return false;
FileNode n = fs.getFirstTopLevelNode();
if( n.type() != FileNode::SEQ )
return false;
FileNodeIterator it = n.begin(), it_end = n.end();
for( ; it != it_end; ++it )
l.push_back((string)*it);
return true;
}
static bool runAndSave(const string& outputFilename,
const vector<vector<Point2f> >& imagePoints,
Size imageSize, Size boardSize, Pattern patternType, float squareSize,
float grid_width, bool release_object,
float aspectRatio, int flags, Mat& cameraMatrix,
Mat& distCoeffs, bool writeExtrinsics, bool writePoints, bool writeGrid )
{
vector<Mat> rvecs, tvecs;
vector<float> reprojErrs;
double totalAvgErr = 0;
vector<Point3f> newObjPoints;
bool ok = runCalibration(imagePoints, imageSize, boardSize, patternType, squareSize,
aspectRatio, grid_width, release_object, flags, cameraMatrix, distCoeffs,
rvecs, tvecs, reprojErrs, newObjPoints, totalAvgErr);
printf("%s. avg reprojection error = %.7f\n",
ok ? "Calibration succeeded" : "Calibration failed",
totalAvgErr);
if( ok )
saveCameraParams( outputFilename, imageSize,
boardSize, squareSize, aspectRatio,
flags, cameraMatrix, distCoeffs,
writeExtrinsics ? rvecs : vector<Mat>(),
writeExtrinsics ? tvecs : vector<Mat>(),
writeExtrinsics ? reprojErrs : vector<float>(),
writePoints ? imagePoints : vector<vector<Point2f> >(),
writeGrid ? newObjPoints : vector<Point3f>(),
totalAvgErr );
return ok;
}
int main( int argc, char** argv )
{
Size boardSize, imageSize;
float squareSize, aspectRatio = 1;
Mat cameraMatrix, distCoeffs;
string outputFilename;
string inputFilename = "";
int i, nframes;
bool writeExtrinsics, writePoints;
bool undistortImage = false;
int flags = 0;
VideoCapture capture;
bool flipVertical;
bool showUndistorted;
bool videofile;
int delay;
clock_t prevTimestamp = 0;
int mode = DETECTION;
int cameraId = 0;
vector<vector<Point2f> > imagePoints;
vector<string> imageList;
Pattern pattern = CHESSBOARD;
cv::CommandLineParser parser(argc, argv,
"{help ||}{w||}{h||}{pt|chessboard|}{n|10|}{d|1000|}{s|1|}{o|out_camera_data.yml|}"
"{op||}{oe||}{zt||}{a||}{p||}{v||}{V||}{su||}"
"{oo||}{ws|11|}{dt||}"
"{@input_data|0|}");
if (parser.has("help"))
{
help();
return 0;
}
boardSize.width = parser.get<int>( "w" );
boardSize.height = parser.get<int>( "h" );
if ( parser.has("pt") )
{
string val = parser.get<string>("pt");
if( val == "circles" )
pattern = CIRCLES_GRID;
else if( val == "acircles" )
pattern = ASYMMETRIC_CIRCLES_GRID;
else if( val == "chessboard" )
pattern = CHESSBOARD;
else
return fprintf( stderr, "Invalid pattern type: must be chessboard or circles\n" ), -1;
}
squareSize = parser.get<float>("s");
nframes = parser.get<int>("n");
delay = parser.get<int>("d");
writePoints = parser.has("op");
writeExtrinsics = parser.has("oe");
bool writeGrid = parser.has("oo");
if (parser.has("a")) {
flags |= CALIB_FIX_ASPECT_RATIO;
aspectRatio = parser.get<float>("a");
}
if ( parser.has("zt") )
flags |= CALIB_ZERO_TANGENT_DIST;
if ( parser.has("p") )
flags |= CALIB_FIX_PRINCIPAL_POINT;
flipVertical = parser.has("v");
videofile = parser.has("V");
if ( parser.has("o") )
outputFilename = parser.get<string>("o");
showUndistorted = parser.has("su");
if ( isdigit(parser.get<string>("@input_data")[0]) )
cameraId = parser.get<int>("@input_data");
else
inputFilename = parser.get<string>("@input_data");
int winSize = parser.get<int>("ws");
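// nominal grid width (distance between the first and last corner of the first row);
// a -dt value below overrides it with a measured distance and enables the
// object-releasing calibration method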
float grid_width = squareSize * (boardSize.width - 1);
bool release_object = false;
if (parser.has("dt")) {
grid_width = parser.get<float>("dt");
release_object = true;
}
if (!parser.check())
{
help();
parser.printErrors();
return -1;
}
if ( squareSize <= 0 )
return fprintf( stderr, "Invalid board square width\n" ), -1;
if ( nframes <= 3 )
return printf("Invalid number of images\n" ), -1;
if ( aspectRatio <= 0 )
return printf( "Invalid aspect ratio\n" ), -1;
if ( delay <= 0 )
return printf( "Invalid delay\n" ), -1;
if ( boardSize.width <= 0 )
return fprintf( stderr, "Invalid board width\n" ), -1;
if ( boardSize.height <= 0 )
return fprintf( stderr, "Invalid board height\n" ), -1;
if( !inputFilename.empty() )
{
if( !videofile && readStringList(inputFilename, imageList) )
mode = CAPTURING;
else
capture.open(inputFilename);
}
else
capture.open(cameraId);
if( !capture.isOpened() && imageList.empty() )
return fprintf( stderr, "Could not initialize video (%d) capture\n",cameraId ), -2;
if( !imageList.empty() )
nframes = (int)imageList.size();
if( capture.isOpened() )
printf( "%s", liveCaptureHelp );
namedWindow( "Image View", 1 );
for(i = 0;;i++)
{
Mat view, viewGray;
bool blink = false;
if( capture.isOpened() )
{
Mat view0;
capture >> view0;
view0.copyTo(view);
}
else if( i < (int)imageList.size() )
view = imread(imageList[i], 1);
if(view.empty())
{
if( imagePoints.size() > 0 )
runAndSave(outputFilename, imagePoints, imageSize,
boardSize, pattern, squareSize, grid_width, release_object, aspectRatio,
flags, cameraMatrix, distCoeffs,
writeExtrinsics, writePoints, writeGrid);
break;
}
imageSize = view.size();
if( flipVertical )
flip( view, view, 0 );
vector<Point2f> pointbuf;
cvtColor(view, viewGray, COLOR_BGR2GRAY);
bool found;
switch( pattern )
{
case CHESSBOARD:
found = findChessboardCorners( view, boardSize, pointbuf,
CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_FAST_CHECK | CALIB_CB_NORMALIZE_IMAGE);
break;
case CIRCLES_GRID:
found = findCirclesGrid( view, boardSize, pointbuf );
break;
case ASYMMETRIC_CIRCLES_GRID:
found = findCirclesGrid( view, boardSize, pointbuf, CALIB_CB_ASYMMETRIC_GRID );
break;
default:
return fprintf( stderr, "Unknown pattern type\n" ), -1;
}
// improve the found corners' coordinate accuracy
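// (epsilon 0.0001 instead of a looser 0.1: the object-releasing method benefits
// from more precise corner locations)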
if( pattern == CHESSBOARD && found) cornerSubPix( viewGray, pointbuf, Size(winSize,winSize),
Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.0001 ));
if( mode == CAPTURING && found &&
(!capture.isOpened() || clock() - prevTimestamp > delay*1e-3*CLOCKS_PER_SEC) )
{
imagePoints.push_back(pointbuf);
prevTimestamp = clock();
blink = capture.isOpened();
}
if(found)
drawChessboardCorners( view, boardSize, Mat(pointbuf), found );
string msg = mode == CAPTURING ? "100/100" :
mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
int baseLine = 0;
Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);
if( mode == CAPTURING )
{
if(undistortImage)
msg = format( "%d/%d Undist", (int)imagePoints.size(), nframes );
else
msg = format( "%d/%d", (int)imagePoints.size(), nframes );
}
putText( view, msg, textOrigin, 1, 1,
mode != CALIBRATED ? Scalar(0,0,255) : Scalar(0,255,0));
if( blink )
bitwise_not(view, view);
if( mode == CALIBRATED && undistortImage )
{
Mat temp = view.clone();
undistort(temp, view, cameraMatrix, distCoeffs);
}
imshow("Image View", view);
char key = (char)waitKey(capture.isOpened() ? 50 : 500);
if( key == 27 )
break;
if( key == 'u' && mode == CALIBRATED )
undistortImage = !undistortImage;
if( capture.isOpened() && key == 'g' )
{
mode = CAPTURING;
imagePoints.clear();
}
if( mode == CAPTURING && imagePoints.size() >= (unsigned)nframes )
{
if( runAndSave(outputFilename, imagePoints, imageSize,
boardSize, pattern, squareSize, grid_width, release_object, aspectRatio,
flags, cameraMatrix, distCoeffs,
writeExtrinsics, writePoints, writeGrid))
mode = CALIBRATED;
else
mode = DETECTION;
if( !capture.isOpened() )
break;
}
}
if( !capture.isOpened() && showUndistorted )
{
Mat view, rview, map1, map2;
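// compute the undistortion maps once and reuse them for every image in the list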
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
imageSize, CV_16SC2, map1, map2);
for( i = 0; i < (int)imageList.size(); i++ )
{
view = imread(imageList[i], 1);
if(view.empty())
continue;
//undistort( view, rview, cameraMatrix, distCoeffs, cameraMatrix );
remap(view, rview, map1, map2, INTER_LINEAR);
imshow("Image View", rview);
char c = (char)waitKey();
if( c == 27 || c == 'q' || c == 'Q' )
break;
}
}
return 0;
}