opencv/modules/calib3d/test/test_cameracalibration.cpp
Wenfeng CAI 31be03a805 Merge pull request #12772 from xoox:calib-release-object
More accurate pinhole camera calibration with imperfect planar target (#12772)
43 commits:

* Add derivatives with respect to object points

Add an output parameter to calculate derivatives of image points with
respect to 3D coordinates of object points. The output jacobian matrix
is a 2Nx3N matrix where N is the number of points.

This commit introduces an incompatibility with the old function signature.
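
For illustration only (not part of the patch), the block structure of this matrix can be cross-checked numerically with the public cv::projectPoints(): perturbing one coordinate of one object point only moves the corresponding image point, which is why only a 2x3 block per point is non-zero. A minimal sketch with illustrative names:

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Hypothetical helper: central-difference estimate of the 2x3 block
    // d(imagePoint_i)/d(objectPoint_i) for a single point p (no distortion).
    static cv::Matx23d numericDpdo(const cv::Point3f& p, const cv::Mat& rvec,
                                   const cv::Mat& tvec, const cv::Mat& K,
                                   double eps = 1e-4)
    {
        cv::Matx23d J;
        for (int c = 0; c < 3; c++)
        {
            cv::Vec3f lo(p.x, p.y, p.z), hi(p.x, p.y, p.z);
            lo[c] -= (float)eps;
            hi[c] += (float)eps;
            std::vector<cv::Point2f> ipLo, ipHi;
            cv::projectPoints(std::vector<cv::Point3f>(1, cv::Point3f(lo)), rvec, tvec,
                              K, cv::noArray(), ipLo);
            cv::projectPoints(std::vector<cv::Point3f>(1, cv::Point3f(hi)), rvec, tvec,
                              K, cv::noArray(), ipHi);
            J(0, c) = (ipHi[0].x - ipLo[0].x) / (2 * eps);
            J(1, c) = (ipHi[0].y - ipLo[0].y) / (2 * eps);
        }
        return J;
    }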

* Set zero for dpdo matrix before using

dpdo is a sparse matrix whose only non-zero values lie close to the major
diagonal. It is zeroed first because only the elements near the major
diagonal are computed.

* Add jacobian columns to projectPoints()

Columns of derivatives with respect to the coordinates of the 3D object
points are added to the output Jacobian matrix. This might break callers
that rely on the previous column layout of the Jacobian.
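
For reference, the conventional column layout (the one the C++ test further down in this file relies on) is split like this; the exact ranges are what such callers bake in:

    #include <opencv2/core.hpp>

    // Sketch: split the 2Nx(10+k) Jacobian returned by cv::projectPoints()
    // into its conventional blocks (same column ranges as used by the test below).
    static void splitJacobian(const cv::Mat& J, cv::Mat& dpdrot, cv::Mat& dpdt,
                              cv::Mat& dpdf, cv::Mat& dpdc, cv::Mat& dpddist)
    {
        dpdrot  = J.colRange(0, 3);        // w.r.t. the rotation vector
        dpdt    = J.colRange(3, 6);        // w.r.t. the translation vector
        dpdf    = J.colRange(6, 8);        // w.r.t. the focal lengths (fx, fy)
        dpdc    = J.colRange(8, 10);       // w.r.t. the principal point (cx, cy)
        dpddist = J.colRange(10, J.cols);  // w.r.t. the distortion coefficients
    }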

* Adapt test code to updated project functions

The test cases for projectPoints() and cvProjectPoints2() are updated to
match the new function signatures.

* Add accuracy test code for dpdo

* Add badarg test for dpdo

* Add new enum item for new calibration method

CALIB_RELEASE_OBJECT indicates whether to release the 3D coordinates of the
object points. The method was proposed in: K. H. Strobl and G. Hirzinger.
"More Accurate Pinhole Camera Calibration with Imperfect Planar Target".
In Proceedings of the IEEE International Conference on Computer Vision
(ICCV 2011), 1st IEEE Workshop on Challenges and Opportunities in Robot
Perception, Barcelona, Spain, pp. 1068-1075, November 2011.
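
For context, the method ends up exposed as calibrateCameraRO(); a minimal usage sketch with illustrative names (the fixed point is chosen far from the first object point, e.g. the top-right board corner):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Minimal sketch of the object-releasing calibration; objectPoints and
    // imagePoints are the usual per-view point lists collected by the caller.
    static double calibrateReleasingObject(
            const std::vector<std::vector<cv::Point3f> >& objectPoints,
            const std::vector<std::vector<cv::Point2f> >& imagePoints,
            cv::Size imageSize, cv::Size boardSize)
    {
        int iFixedPoint = boardSize.width - 1;   // e.g. the top-right corner point
        cv::Mat cameraMatrix, distCoeffs, newObjPoints;
        std::vector<cv::Mat> rvecs, tvecs;
        return cv::calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
                                     cameraMatrix, distCoeffs, rvecs, tvecs,
                                     newObjPoints, 0 /* flags */);
    }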

* Add releasing object method into internal function

It is a simple extension of the standard calibration scheme. We choose to
fix the first object point, the last object point, and a user-selected
fixed point.

* Add interfaces for extended calibration method

* Refine document for calibrateCamera()

When releasing object points, only the z coordinate of
objectPoints[0].back() is fixed.

* Add link to strobl2011iccv paper

* Improve documentation for calibrateCamera()

* Add implementations of wrapping calibrateCamera()

* Add checking for params of new calibration method

If the input parameters do not qualify, fall back to the standard
calibration method.

* Add camera calibration method of releasing object

The current implementation is equal to or better than
https://github.com/xoox/calibrel

* Update doc for CALIB_RELEASE_OBJECT

CALIB_USE_QR or CALIB_USE_LU can be used for faster calibration, at the
cost of potentially less precise and less stable results in some rare
cases.
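
For example (illustrative only), the solver choice is just another bit in the flags argument:

    #include <opencv2/calib3d.hpp>

    // Sketch (illustrative): the solver choice is passed via the `flags`
    // argument of calibrateCamera()/calibrateCameraRO().
    static const int kFasterCalibFlags = cv::CALIB_USE_LU;  // or cv::CALIB_USE_QR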

* Add RELEASE_OBJECT calibration to tutorial code

To select the object-releasing calibration method, a command-line
parameter `-d=<number>` should be provided.

* Update tutorial doc for camera_calibration

If the method of releasing object points is merged into OpenCV, it is
expected to be first released in 4.1.

* Reduce epsilon for cornerSubPix()

An epsilon of 0.1 is too coarse. More precise corner positions are
required by the object-releasing calibration method.
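
A sketch of the corresponding refinement call (window size and iteration count are illustrative; the point is the reduced termination epsilon):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Sketch: refine chessboard corners with a tighter epsilon than 0.1.
    static void refineCorners(const cv::Mat& grayImage, std::vector<cv::Point2f>& corners)
    {
        cv::cornerSubPix(grayImage, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.0001));
    }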

* Refine camera calibration tutorial

The hypothesized coordinates are used to indicate which distance between
two specified object points must be measured.

* Update sample calibration code method selection

As in the camera_calibration tutorial application, a command-line
argument `-dt=<number>` is used to select the calibration method.
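
A sketch of reading that switch (the key name `dt` comes from this commit message; everything else is illustrative):

    #include <opencv2/core/utility.hpp>

    // Sketch: parse the method-selection argument, e.g. "-dt=1".
    static int readCalibMethodSwitch(int argc, char** argv)
    {
        cv::CommandLineParser parser(argc, argv, "{dt|0|calibration method selector}");
        return parser.get<int>("dt");
    }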

* Add guard to flags of cvCalibrateCamera2()

cvCalibrateCamera2() does not accept CALIB_RELEASE_OBJECT unless an
overloaded interface is added in the future.

* Simplify fallback when iFixedPoint is out of range

* Refactor projectPoints() to keep compatibilities

* Fix arg string "Bad rvecs header"

* Read calibration flags from test data files

Instead of being hard-coded into the source file, the calibration flags
are read from the test data files.
opencv_extra/testdata/cv/cameracalibration/calib?.dat must be kept in sync
with the test code.

* Add new C interface of cvCalibrateCamera4()

With this newly added C interface, the extended calibration method with
CALIB_RELEASE_OBJECT can be called through the C API.
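
A sketch of how the C interface is invoked, mirroring CV_CameraCalibrationTest_C::calibrate() further down in this file (all buffers are caller-owned; `total` is the sum of pointCounts):

    #include "opencv2/calib3d/calib3d_c.h"

    // Sketch (illustrative wrapper around caller-owned buffers).
    static void calibrateViaCAPI(int imageCount, int total, int* pointCounts,
                                 CvSize imageSize, int iFixedPoint,
                                 CvPoint3D64f* objectPoints, CvPoint2D64f* imagePoints,
                                 double* cameraMatrix /*3x3*/, double* distCoeffs /*4*/,
                                 double* rotMatrices /*imageCount x 9*/,
                                 double* transVects /*imageCount x 3*/,
                                 double* newObjPoints /*pointCounts[0] x 3*/, int flags)
    {
        CvMat _objectPoints = cvMat(1, total, CV_64FC3, objectPoints);
        CvMat _imagePoints  = cvMat(1, total, CV_64FC2, imagePoints);
        CvMat _pointCounts  = cvMat(1, imageCount, CV_32S, pointCounts);
        CvMat _cameraMatrix = cvMat(3, 3, CV_64F, cameraMatrix);
        CvMat _distCoeffs   = cvMat(4, 1, CV_64F, distCoeffs);
        CvMat _rotMatrices  = cvMat(imageCount, 9, CV_64F, rotMatrices);
        CvMat _transVects   = cvMat(imageCount, 3, CV_64F, transVects);
        CvMat _newObjPoints = cvMat(1, pointCounts[0], CV_64FC3, newObjPoints);
        cvCalibrateCamera4(&_objectPoints, &_imagePoints, &_pointCounts, imageSize,
                           iFixedPoint, &_cameraMatrix, &_distCoeffs, &_rotMatrices,
                           &_transVects, &_newObjPoints, flags);
    }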

* Add regression test of extended calibration method

It has been tested with the newly added test data in the
xoox:calib-release-object branch of opencv_extra.

* Fix assertion in test_cameracalibration.cpp

The total number of refined 3D object coordinates is checked.

* Add checker for iFixedPoint in cvCalibrateCamera4

If iFixedPoint is out of the rational range, fall back to the standard method.

* Fix documentation for overloaded calibrateCamera()

* Remove calibration flag of CALIB_RELEASE_OBJECT

The method selection is based on the range of the fixed-point index. For
negative values, the standard calibration method is chosen. Values in a
rational range select the object-releasing calibration method.
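
In other words, mirroring the check used by the test code below:

    // Sketch: how the calibration method is derived from iFixedPoint alone.
    // nPoints is the number of object points in the first view.
    static bool usesReleaseObjectMethod(int iFixedPoint, int nPoints)
    {
        // Negative or otherwise out-of-range index -> standard calibration;
        // an interior index -> object-releasing calibration.
        return iFixedPoint > 0 && iFixedPoint < nPoints - 1;
    }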

* Use new interfaces instead of function overload

Existing interfaces are preserved and new interfaces are added. Since
most of the code base is shared, calibrateCamera() is now a wrapper
around calibrateCameraRO().
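
A sketch of the wrapper relationship described here (an assumption about its shape, not a copy of the actual implementation): forwarding a negative fixed-point index and an empty newObjPoints output makes calibrateCameraRO() behave like the standard method.

    #include <opencv2/calib3d.hpp>

    // Hypothetical sketch of the wrapper (not the actual implementation).
    static double calibrateCameraViaRO(cv::InputArrayOfArrays objectPoints,
                                       cv::InputArrayOfArrays imagePoints,
                                       cv::Size imageSize,
                                       cv::InputOutputArray cameraMatrix,
                                       cv::InputOutputArray distCoeffs,
                                       cv::OutputArrayOfArrays rvecs,
                                       cv::OutputArrayOfArrays tvecs,
                                       int flags)
    {
        // A negative fixed-point index selects the standard method; no refined
        // object points are requested.
        return cv::calibrateCameraRO(objectPoints, imagePoints, imageSize, -1,
                                     cameraMatrix, distCoeffs, rvecs, tvecs,
                                     cv::noArray(), flags);
    }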

* Fix exported name of calibrateCameraRO()

* Update documentation for calibrateCameraRO()

The circumstances where this method is most helpful are described.

* Add note on the rigidity of the calibration target

* Update documentation for calibrateCameraRO()

It is clarified that iFixedPoint is used as a switch to select the
calibration method. If the input data do not qualify, exceptions are
thrown instead of falling back.

* Clarify iFixedPoint as switch and remove fallback

iFixedPoint is now used as a switch for calibration method selection. No
fallback scheme is used anymore. If the input data do not qualify,
exceptions are thrown.

* Add badarg test for object-releasing method

* Fix document format of sample list

List items at the same level should be indented the same way; otherwise
Doxygen formats them as nested lists.

* Add brief intro for objectPoints and imagePoints

* Sync tutorial to sample calibration code

* Update tutorial compatibility version to 4.0
2018-10-25 19:38:55 +03:00


/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// Intel License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "test_precomp.hpp"
#include "opencv2/calib3d/calib3d_c.h"
namespace opencv_test { namespace {
#if 0
class CV_ProjectPointsTest : public cvtest::ArrayTest
{
public:
CV_ProjectPointsTest();
protected:
int read_params( CvFileStorage* fs );
void fill_array( int test_case_idx, int i, int j, Mat& arr );
int prepare_test_case( int test_case_idx );
void get_test_array_types_and_sizes( int test_case_idx, vector<vector<Size> >& sizes, vector<vector<int> >& types );
double get_success_error_level( int test_case_idx, int i, int j );
void run_func();
void prepare_to_validation( int );
bool calc_jacobians;
};
CV_ProjectPointsTest::CV_ProjectPointsTest()
: cvtest::ArrayTest( "3d-ProjectPoints", "cvProjectPoints2", "" )
{
test_array[INPUT].push_back(NULL); // rotation vector
test_array[OUTPUT].push_back(NULL); // rotation matrix
test_array[OUTPUT].push_back(NULL); // jacobian (J)
test_array[OUTPUT].push_back(NULL); // rotation vector (backward transform result)
test_array[OUTPUT].push_back(NULL); // inverse transform jacobian (J1)
test_array[OUTPUT].push_back(NULL); // J*J1 (or J1*J) == I(3x3)
test_array[REF_OUTPUT].push_back(NULL);
test_array[REF_OUTPUT].push_back(NULL);
test_array[REF_OUTPUT].push_back(NULL);
test_array[REF_OUTPUT].push_back(NULL);
test_array[REF_OUTPUT].push_back(NULL);
element_wise_relative_error = false;
calc_jacobians = false;
}
int CV_ProjectPointsTest::read_params( CvFileStorage* fs )
{
int code = cvtest::ArrayTest::read_params( fs );
return code;
}
void CV_ProjectPointsTest::get_test_array_types_and_sizes(
int /*test_case_idx*/, vector<vector<Size> >& sizes, vector<vector<int> >& types )
{
RNG& rng = ts->get_rng();
int depth = cvtest::randInt(rng) % 2 == 0 ? CV_32F : CV_64F;
int i, code;
code = cvtest::randInt(rng) % 3;
types[INPUT][0] = CV_MAKETYPE(depth, 1);
if( code == 0 )
{
sizes[INPUT][0] = cvSize(1,1);
types[INPUT][0] = CV_MAKETYPE(depth, 3);
}
else if( code == 1 )
sizes[INPUT][0] = cvSize(3,1);
else
sizes[INPUT][0] = cvSize(1,3);
sizes[OUTPUT][0] = cvSize(3, 3);
types[OUTPUT][0] = CV_MAKETYPE(depth, 1);
types[OUTPUT][1] = CV_MAKETYPE(depth, 1);
if( cvtest::randInt(rng) % 2 )
sizes[OUTPUT][1] = cvSize(3,9);
else
sizes[OUTPUT][1] = cvSize(9,3);
types[OUTPUT][2] = types[INPUT][0];
sizes[OUTPUT][2] = sizes[INPUT][0];
types[OUTPUT][3] = types[OUTPUT][1];
sizes[OUTPUT][3] = cvSize(sizes[OUTPUT][1].height, sizes[OUTPUT][1].width);
types[OUTPUT][4] = types[OUTPUT][1];
sizes[OUTPUT][4] = cvSize(3,3);
calc_jacobians = 1;//cvtest::randInt(rng) % 3 != 0;
if( !calc_jacobians )
sizes[OUTPUT][1] = sizes[OUTPUT][3] = sizes[OUTPUT][4] = cvSize(0,0);
for( i = 0; i < 5; i++ )
{
types[REF_OUTPUT][i] = types[OUTPUT][i];
sizes[REF_OUTPUT][i] = sizes[OUTPUT][i];
}
}
double CV_ProjectPointsTest::get_success_error_level( int /*test_case_idx*/, int /*i*/, int j )
{
return j == 4 ? 1e-2 : 1e-2;
}
void CV_ProjectPointsTest::fill_array( int /*test_case_idx*/, int /*i*/, int /*j*/, CvMat* arr )
{
double r[3], theta0, theta1, f;
CvMat _r = cvMat( arr->rows, arr->cols, CV_MAKETYPE(CV_64F,CV_MAT_CN(arr->type)), r );
RNG& rng = ts->get_rng();
r[0] = cvtest::randReal(rng)*CV_PI*2;
r[1] = cvtest::randReal(rng)*CV_PI*2;
r[2] = cvtest::randReal(rng)*CV_PI*2;
theta0 = sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
theta1 = fmod(theta0, CV_PI*2);
if( theta1 > CV_PI )
theta1 = -(CV_PI*2 - theta1);
f = theta1/(theta0 ? theta0 : 1);
r[0] *= f;
r[1] *= f;
r[2] *= f;
cvTsConvert( &_r, arr );
}
int CV_ProjectPointsTest::prepare_test_case( int test_case_idx )
{
int code = cvtest::ArrayTest::prepare_test_case( test_case_idx );
return code;
}
void CV_ProjectPointsTest::run_func()
{
CvMat *v2m_jac = 0, *m2v_jac = 0;
if( calc_jacobians )
{
v2m_jac = &test_mat[OUTPUT][1];
m2v_jac = &test_mat[OUTPUT][3];
}
cvProjectPoints2( &test_mat[INPUT][0], &test_mat[OUTPUT][0], v2m_jac );
cvProjectPoints2( &test_mat[OUTPUT][0], &test_mat[OUTPUT][2], m2v_jac );
}
void CV_ProjectPointsTest::prepare_to_validation( int /*test_case_idx*/ )
{
const CvMat* vec = &test_mat[INPUT][0];
CvMat* m = &test_mat[REF_OUTPUT][0];
CvMat* vec2 = &test_mat[REF_OUTPUT][2];
CvMat* v2m_jac = 0, *m2v_jac = 0;
double theta0, theta1;
if( calc_jacobians )
{
v2m_jac = &test_mat[REF_OUTPUT][1];
m2v_jac = &test_mat[REF_OUTPUT][3];
}
cvTsProjectPoints( vec, m, v2m_jac );
cvTsProjectPoints( m, vec2, m2v_jac );
cvTsCopy( vec, vec2 );
theta0 = cvtest::norm( cvarrtomat(vec2), 0, CV_L2 );
theta1 = fmod( theta0, CV_PI*2 );
if( theta1 > CV_PI )
theta1 = -(CV_PI*2 - theta1);
cvScale( vec2, vec2, theta1/(theta0 ? theta0 : 1) );
if( calc_jacobians )
{
//cvInvert( v2m_jac, m2v_jac, CV_SVD );
if( cvtest::norm(cvarrtomat(&test_mat[OUTPUT][3]), 0, CV_C) < 1000 )
{
cvTsGEMM( &test_mat[OUTPUT][1], &test_mat[OUTPUT][3],
1, 0, 0, &test_mat[OUTPUT][4],
v2m_jac->rows == 3 ? 0 : CV_GEMM_A_T + CV_GEMM_B_T );
}
else
{
cvTsSetIdentity( &test_mat[OUTPUT][4], cvScalarAll(1.) );
cvTsCopy( &test_mat[REF_OUTPUT][2], &test_mat[OUTPUT][2] );
}
cvTsSetIdentity( &test_mat[REF_OUTPUT][4], cvScalarAll(1.) );
}
}
CV_ProjectPointsTest ProjectPoints_test;
#endif
// --------------------------------- CV_CameraCalibrationTest --------------------------------------------
class CV_CameraCalibrationTest : public cvtest::BaseTest
{
public:
CV_CameraCalibrationTest();
~CV_CameraCalibrationTest();
void clear();
protected:
int compare(double* val, double* refVal, int len,
double eps, const char* paramName);
virtual void calibrate( int imageCount, int* pointCounts,
CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints,
int iFixedPoint, double* distortionCoeffs, double* cameraMatrix, double* translationVectors,
double* rotationMatrices, double* newObjPoints, double *stdDevs, double* perViewErrors,
int flags ) = 0;
virtual void project( int pointCount, CvPoint3D64f* objectPoints,
double* rotationMatrix, double* translationVector,
double* cameraMatrix, double* distortion, CvPoint2D64f* imagePoints ) = 0;
void run(int);
};
CV_CameraCalibrationTest::CV_CameraCalibrationTest()
{
}
CV_CameraCalibrationTest::~CV_CameraCalibrationTest()
{
clear();
}
void CV_CameraCalibrationTest::clear()
{
cvtest::BaseTest::clear();
}
int CV_CameraCalibrationTest::compare(double* val, double* ref_val, int len,
double eps, const char* param_name )
{
return cvtest::cmpEps2_64f( ts, val, ref_val, len, eps, param_name );
}
void CV_CameraCalibrationTest::run( int start_from )
{
int code = cvtest::TS::OK;
cv::String filepath;
cv::String filename;
Size imageSize;
Size etalonSize;
int numImages;
CvPoint2D64f* imagePoints;
CvPoint3D64f* objectPoints;
CvPoint2D64f* reprojectPoints;
double* transVects;
double* rotMatrs;
double* newObjPoints;
double* stdDevs;
double* perViewErrors;
double* goodTransVects;
double* goodRotMatrs;
double* goodObjPoints;
double* goodPerViewErrors;
double* goodStdDevs;
double cameraMatrix[3*3];
double distortion[5]={0,0,0,0,0};
double goodDistortion[4];
int* numbers;
FILE* file = 0;
FILE* datafile = 0;
int i,j;
int currImage;
int currPoint;
int calibFlags;
char i_dat_file[100];
int numPoints;
int numTests;
int currTest;
imagePoints = 0;
objectPoints = 0;
reprojectPoints = 0;
numbers = 0;
transVects = 0;
rotMatrs = 0;
goodTransVects = 0;
goodRotMatrs = 0;
int progress = 0;
int values_read = -1;
filepath = cv::format("%scv/cameracalibration/", ts->get_data_path().c_str() );
filename = cv::format("%sdatafiles.txt", filepath.c_str() );
datafile = fopen( filename.c_str(), "r" );
if( datafile == 0 )
{
ts->printf( cvtest::TS::LOG, "Could not open file with list of test files: %s\n", filename.c_str() );
code = cvtest::TS::FAIL_MISSING_TEST_DATA;
goto _exit_;
}
values_read = fscanf(datafile,"%d",&numTests);
CV_Assert(values_read == 1);
for( currTest = start_from; currTest < numTests; currTest++ )
{
values_read = fscanf(datafile,"%s",i_dat_file);
CV_Assert(values_read == 1);
filename = cv::format("%s%s", filepath.c_str(), i_dat_file);
file = fopen(filename.c_str(),"r");
ts->update_context( this, currTest, true );
if( file == 0 )
{
ts->printf( cvtest::TS::LOG,
"Can't open current test file: %s\n",filename.c_str());
if( numTests == 1 )
{
code = cvtest::TS::FAIL_MISSING_TEST_DATA;
goto _exit_;
}
continue; // if there is more than one test, just skip the test
}
values_read = fscanf(file,"%d %d\n",&(imageSize.width),&(imageSize.height));
CV_Assert(values_read == 2);
if( imageSize.width <= 0 || imageSize.height <= 0 )
{
ts->printf( cvtest::TS::LOG, "Image size in test file is incorrect\n" );
code = cvtest::TS::FAIL_INVALID_TEST_DATA;
goto _exit_;
}
/* Read etalon size */
values_read = fscanf(file,"%d %d\n",&(etalonSize.width),&(etalonSize.height));
CV_Assert(values_read == 2);
if( etalonSize.width <= 0 || etalonSize.height <= 0 )
{
ts->printf( cvtest::TS::LOG, "Pattern size in test file is incorrect\n" );
code = cvtest::TS::FAIL_INVALID_TEST_DATA;
goto _exit_;
}
numPoints = etalonSize.width * etalonSize.height;
/* Read number of images */
values_read = fscanf(file,"%d\n",&numImages);
CV_Assert(values_read == 1);
if( numImages <=0 )
{
ts->printf( cvtest::TS::LOG, "Number of images in test file is incorrect\n");
code = cvtest::TS::FAIL_INVALID_TEST_DATA;
goto _exit_;
}
/* Read calibration flags */
values_read = fscanf(file,"%d\n",&calibFlags);
CV_Assert(values_read == 1);
/* Read index of the fixed point */
int iFixedPoint;
values_read = fscanf(file,"%d\n",&iFixedPoint);
CV_Assert(values_read == 1);
/* Need to allocate memory */
imagePoints = (CvPoint2D64f*)cvAlloc( numPoints *
numImages * sizeof(CvPoint2D64f));
objectPoints = (CvPoint3D64f*)cvAlloc( numPoints *
numImages * sizeof(CvPoint3D64f));
reprojectPoints = (CvPoint2D64f*)cvAlloc( numPoints *
numImages * sizeof(CvPoint2D64f));
/* Alloc memory for numbers */
numbers = (int*)cvAlloc( numImages * sizeof(int));
/* Fill it with the number of points in each image */
for( currImage = 0; currImage < numImages; currImage++ )
{
numbers[currImage] = etalonSize.width * etalonSize.height;
}
/* Allocate memory for translation vectors and rotation matrices */
transVects = (double*)cvAlloc(3 * 1 * numImages * sizeof(double));
rotMatrs = (double*)cvAlloc(3 * 3 * numImages * sizeof(double));
newObjPoints = (double*)cvAlloc(3 * numbers[0] * sizeof(double));
stdDevs = (double*)cvAlloc((CV_CALIB_NINTRINSIC + 6*numImages + 3*numbers[0])
* sizeof(double));
perViewErrors = (double*)cvAlloc(numImages * sizeof(double));
goodTransVects = (double*)cvAlloc(3 * 1 * numImages * sizeof(double));
goodRotMatrs = (double*)cvAlloc(3 * 3 * numImages * sizeof(double));
goodObjPoints = (double*)cvAlloc(3 * numbers[0] * sizeof(double));
goodPerViewErrors = (double*)cvAlloc(numImages * sizeof(double));
goodStdDevs = (double*)cvAlloc((CV_CALIB_NINTRINSIC + 6*numImages + 3*numbers[0])
* sizeof(double));
/* Read object points */
i = 0;/* shift for current point */
for( currImage = 0; currImage < numImages; currImage++ )
{
for( currPoint = 0; currPoint < numPoints; currPoint++ )
{
double x,y,z;
values_read = fscanf(file,"%lf %lf %lf\n",&x,&y,&z);
CV_Assert(values_read == 3);
(objectPoints+i)->x = x;
(objectPoints+i)->y = y;
(objectPoints+i)->z = z;
i++;
}
}
/* Read image points */
i = 0;/* shift for current point */
for( currImage = 0; currImage < numImages; currImage++ )
{
for( currPoint = 0; currPoint < numPoints; currPoint++ )
{
double x,y;
values_read = fscanf(file,"%lf %lf\n",&x,&y);
CV_Assert(values_read == 2);
(imagePoints+i)->x = x;
(imagePoints+i)->y = y;
i++;
}
}
/* Read good data computed before */
/* Focal lengths */
double goodFcx,goodFcy;
values_read = fscanf(file,"%lf %lf",&goodFcx,&goodFcy);
CV_Assert(values_read == 2);
/* Principal points */
double goodCx,goodCy;
values_read = fscanf(file,"%lf %lf",&goodCx,&goodCy);
CV_Assert(values_read == 2);
/* Read distortion */
values_read = fscanf(file,"%lf",goodDistortion+0); CV_Assert(values_read == 1);
values_read = fscanf(file,"%lf",goodDistortion+1); CV_Assert(values_read == 1);
values_read = fscanf(file,"%lf",goodDistortion+2); CV_Assert(values_read == 1);
values_read = fscanf(file,"%lf",goodDistortion+3); CV_Assert(values_read == 1);
/* Read good Rot matrices */
for( currImage = 0; currImage < numImages; currImage++ )
{
for( i = 0; i < 3; i++ )
for( j = 0; j < 3; j++ )
{
values_read = fscanf(file, "%lf", goodRotMatrs + currImage * 9 + j * 3 + i);
CV_Assert(values_read == 1);
}
}
/* Read good Trans vectors */
for( currImage = 0; currImage < numImages; currImage++ )
{
for( i = 0; i < 3; i++ )
{
values_read = fscanf(file, "%lf", goodTransVects + currImage * 3 + i);
CV_Assert(values_read == 1);
}
}
bool releaseObject = iFixedPoint > 0 && iFixedPoint < numPoints - 1;
/* Read good refined 3D object points */
if( releaseObject )
{
for( i = 0; i < numbers[0]; i++ )
{
for( j = 0; j < 3; j++ )
{
values_read = fscanf(file, "%lf", goodObjPoints + i * 3 + j);
CV_Assert(values_read == 1);
}
}
}
/* Read good stdDeviations */
for (i = 0; i < CV_CALIB_NINTRINSIC + numImages*6; i++)
{
values_read = fscanf(file, "%lf", goodStdDevs + i);
CV_Assert(values_read == 1);
}
if( releaseObject )
{
for( i = CV_CALIB_NINTRINSIC + numImages*6; i < CV_CALIB_NINTRINSIC + numImages*6
+ numbers[0]*3; i++ )
{
values_read = fscanf(file, "%lf", goodStdDevs + i);
CV_Assert(values_read == 1);
}
}
memset( cameraMatrix, 0, 9*sizeof(cameraMatrix[0]) );
cameraMatrix[0] = cameraMatrix[4] = 807.;
cameraMatrix[2] = (imageSize.width - 1)*0.5;
cameraMatrix[5] = (imageSize.height - 1)*0.5;
cameraMatrix[8] = 1.;
/* Now we can calibrate camera */
calibrate( numImages,
numbers,
cvSize(imageSize),
imagePoints,
objectPoints,
iFixedPoint,
distortion,
cameraMatrix,
transVects,
rotMatrs,
newObjPoints,
stdDevs,
perViewErrors,
calibFlags );
/* ---- Reproject points to the image ---- */
for( currImage = 0; currImage < numImages; currImage++ )
{
int nPoints = etalonSize.width * etalonSize.height;
if( releaseObject )
{
memcpy( objectPoints + currImage * nPoints, newObjPoints,
nPoints * 3 * sizeof(double) );
}
project( nPoints,
objectPoints + currImage * nPoints,
rotMatrs + currImage * 9,
transVects + currImage * 3,
cameraMatrix,
distortion,
reprojectPoints + currImage * nPoints);
}
/* ----- Compute reprojection error ----- */
i = 0;
double dx,dy;
double rx,ry;
double meanDx,meanDy;
double maxDx = 0.0;
double maxDy = 0.0;
meanDx = 0;
meanDy = 0;
for( currImage = 0; currImage < numImages; currImage++ )
{
double imageMeanDx = 0;
double imageMeanDy = 0;
for( currPoint = 0; currPoint < etalonSize.width * etalonSize.height; currPoint++ )
{
rx = reprojectPoints[i].x;
ry = reprojectPoints[i].y;
dx = rx - imagePoints[i].x;
dy = ry - imagePoints[i].y;
meanDx += dx;
meanDy += dy;
imageMeanDx += dx*dx;
imageMeanDy += dy*dy;
dx = fabs(dx);
dy = fabs(dy);
if( dx > maxDx )
maxDx = dx;
if( dy > maxDy )
maxDy = dy;
i++;
}
goodPerViewErrors[currImage] = sqrt( (imageMeanDx + imageMeanDy) /
(etalonSize.width * etalonSize.height));
//only for the C version of the test (it does not provide evaluation of perViewErrors
//and returns zeros)
if(perViewErrors[currImage] == 0.0)
perViewErrors[currImage] = goodPerViewErrors[currImage];
}
meanDx /= numImages * etalonSize.width * etalonSize.height;
meanDy /= numImages * etalonSize.width * etalonSize.height;
/* ========= Compare parameters ========= */
/* ----- Compare focal lengths ----- */
code = compare(cameraMatrix+0,&goodFcx,1,0.1,"fx");
if( code < 0 )
goto _exit_;
code = compare(cameraMatrix+4,&goodFcy,1,0.1,"fy");
if( code < 0 )
goto _exit_;
/* ----- Compare principal points ----- */
code = compare(cameraMatrix+2,&goodCx,1,0.1,"cx");
if( code < 0 )
goto _exit_;
code = compare(cameraMatrix+5,&goodCy,1,0.1,"cy");
if( code < 0 )
goto _exit_;
/* ----- Compare distortion ----- */
code = compare(distortion,goodDistortion,4,0.1,"[k1,k2,p1,p2]");
if( code < 0 )
goto _exit_;
/* ----- Compare rotation matrices ----- */
code = compare(rotMatrs,goodRotMatrs, 9*numImages,0.05,"rotation matrices");
if( code < 0 )
goto _exit_;
/* ----- Compare translation vectors ----- */
code = compare(transVects,goodTransVects, 3*numImages,0.1,"translation vectors");
if( code < 0 )
goto _exit_;
/* ----- Compare refined 3D object points ----- */
if( releaseObject )
{
code = compare(newObjPoints,goodObjPoints, 3*numbers[0],0.1,"refined 3D object points");
if( code < 0 )
goto _exit_;
}
/* ----- Compare per view re-projection errors ----- */
code = compare(perViewErrors,goodPerViewErrors, numImages,0.1,"per view errors vector");
if( code < 0 )
goto _exit_;
/* ----- Compare standard deviations of parameters ----- */
//only for the C version of the test (it does not provide evaluation of stdDevs
//and returns zeros)
for ( i = 0; i < CV_CALIB_NINTRINSIC + 6*numImages + 3*numbers[0]; i++)
{
if(stdDevs[i] == 0.0)
stdDevs[i] = goodStdDevs[i];
}
code = compare(stdDevs,goodStdDevs, CV_CALIB_NINTRINSIC + 6*numImages + 3*numbers[0],.5,
"stdDevs vector");
if( code < 0 )
goto _exit_;
if( maxDx > 1.0 )
{
ts->printf( cvtest::TS::LOG,
"Error in reprojection maxDx=%f > 1.0\n",maxDx);
code = cvtest::TS::FAIL_BAD_ACCURACY; goto _exit_;
}
if( maxDy > 1.0 )
{
ts->printf( cvtest::TS::LOG,
"Error in reprojection maxDy=%f > 1.0\n",maxDy);
code = cvtest::TS::FAIL_BAD_ACCURACY; goto _exit_;
}
progress = update_progress( progress, currTest, numTests, 0 );
cvFree(&imagePoints);
cvFree(&objectPoints);
cvFree(&reprojectPoints);
cvFree(&numbers);
cvFree(&transVects);
cvFree(&rotMatrs);
cvFree(&newObjPoints);
cvFree(&stdDevs);
cvFree(&perViewErrors);
cvFree(&goodTransVects);
cvFree(&goodRotMatrs);
cvFree(&goodObjPoints);
cvFree(&goodPerViewErrors);
cvFree(&goodStdDevs);
fclose(file);
file = 0;
}
_exit_:
if( file )
fclose(file);
if( datafile )
fclose(datafile);
/* Free all allocated memory */
cvFree(&imagePoints);
cvFree(&objectPoints);
cvFree(&reprojectPoints);
cvFree(&numbers);
cvFree(&transVects);
cvFree(&rotMatrs);
cvFree(&goodTransVects);
cvFree(&goodRotMatrs);
if( code < 0 )
ts->set_failed_test_info( code );
}
// --------------------------------- CV_CameraCalibrationTest_C --------------------------------------------
class CV_CameraCalibrationTest_C : public CV_CameraCalibrationTest
{
public:
CV_CameraCalibrationTest_C(){}
protected:
virtual void calibrate( int imageCount, int* pointCounts,
CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, int iFixedPoint,
double* distortionCoeffs, double* cameraMatrix, double* translationVectors,
double* rotationMatrices, double* newObjPoints, double *stdDevs, double* perViewErrors,
int flags );
virtual void project( int pointCount, CvPoint3D64f* objectPoints,
double* rotationMatrix, double* translationVector,
double* cameraMatrix, double* distortion, CvPoint2D64f* imagePoints );
};
void CV_CameraCalibrationTest_C::calibrate(int imageCount, int* pointCounts,
CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, int iFixedPoint,
double* distortionCoeffs, double* cameraMatrix, double* translationVectors,
double* rotationMatrices, double* newObjPoints, double *stdDevs, double *perViewErrors,
int flags )
{
int i, total = 0;
for( i = 0; i < imageCount; i++ )
{
perViewErrors[i] = 0.0;
total += pointCounts[i];
}
for( i = 0; i < CV_CALIB_NINTRINSIC + imageCount*6 + pointCounts[0]*3; i++)
{
stdDevs[i] = 0.0;
}
CvMat _objectPoints = cvMat(1, total, CV_64FC3, objectPoints);
CvMat _imagePoints = cvMat(1, total, CV_64FC2, imagePoints);
CvMat _pointCounts = cvMat(1, imageCount, CV_32S, pointCounts);
CvMat _cameraMatrix = cvMat(3, 3, CV_64F, cameraMatrix);
CvMat _distCoeffs = cvMat(4, 1, CV_64F, distortionCoeffs);
CvMat _rotationMatrices = cvMat(imageCount, 9, CV_64F, rotationMatrices);
CvMat _translationVectors = cvMat(imageCount, 3, CV_64F, translationVectors);
CvMat _newObjPoints = cvMat(1, pointCounts[0], CV_64FC3, newObjPoints);
cvCalibrateCamera4(&_objectPoints, &_imagePoints, &_pointCounts, imageSize, iFixedPoint,
&_cameraMatrix, &_distCoeffs, &_rotationMatrices, &_translationVectors,
&_newObjPoints, flags);
}
void CV_CameraCalibrationTest_C::project( int pointCount, CvPoint3D64f* objectPoints,
double* rotationMatrix, double* translationVector,
double* cameraMatrix, double* distortion, CvPoint2D64f* imagePoints )
{
CvMat _objectPoints = cvMat(1, pointCount, CV_64FC3, objectPoints);
CvMat _imagePoints = cvMat(1, pointCount, CV_64FC2, imagePoints);
CvMat _cameraMatrix = cvMat(3, 3, CV_64F, cameraMatrix);
CvMat _distCoeffs = cvMat(4, 1, CV_64F, distortion);
CvMat _rotationMatrix = cvMat(3, 3, CV_64F, rotationMatrix);
CvMat _translationVector = cvMat(1, 3, CV_64F, translationVector);
cvProjectPoints2(&_objectPoints, &_rotationMatrix, &_translationVector, &_cameraMatrix, &_distCoeffs, &_imagePoints);
}
// --------------------------------- CV_CameraCalibrationTest_CPP --------------------------------------------
class CV_CameraCalibrationTest_CPP : public CV_CameraCalibrationTest
{
public:
CV_CameraCalibrationTest_CPP(){}
protected:
virtual void calibrate( int imageCount, int* pointCounts,
CvSize imageSize, CvPoint2D64f* imagePoints, CvPoint3D64f* objectPoints, int iFixedPoint,
double* distortionCoeffs, double* cameraMatrix, double* translationVectors,
double* rotationMatrices, double* newObjPoints, double *stdDevs, double* perViewErrors,
int flags );
virtual void project( int pointCount, CvPoint3D64f* objectPoints,
double* rotationMatrix, double* translationVector,
double* cameraMatrix, double* distortion, CvPoint2D64f* imagePoints );
};
void CV_CameraCalibrationTest_CPP::calibrate(int imageCount, int* pointCounts,
CvSize _imageSize, CvPoint2D64f* _imagePoints, CvPoint3D64f* _objectPoints, int iFixedPoint,
double* _distortionCoeffs, double* _cameraMatrix, double* translationVectors,
double* rotationMatrices, double* newObjPoints, double *stdDevs, double *perViewErrors,
int flags )
{
vector<vector<Point3f> > objectPoints( imageCount );
vector<vector<Point2f> > imagePoints( imageCount );
Size imageSize = _imageSize;
Mat cameraMatrix, distCoeffs(1,4,CV_64F,Scalar::all(0));
vector<Mat> rvecs, tvecs;
Mat newObjMat;
Mat stdDevsMatInt, stdDevsMatExt;
Mat stdDevsMatObj;
Mat perViewErrorsMat;
CvPoint3D64f* op = _objectPoints;
CvPoint2D64f* ip = _imagePoints;
vector<vector<Point3f> >::iterator objectPointsIt = objectPoints.begin();
vector<vector<Point2f> >::iterator imagePointsIt = imagePoints.begin();
for( int i = 0; i < imageCount; ++objectPointsIt, ++imagePointsIt, i++ )
{
int num = pointCounts[i];
objectPointsIt->resize( num );
imagePointsIt->resize( num );
vector<Point3f>::iterator oIt = objectPointsIt->begin();
vector<Point2f>::iterator iIt = imagePointsIt->begin();
for( int j = 0; j < num; ++oIt, ++iIt, j++, op++, ip++)
{
oIt->x = (float)op->x, oIt->y = (float)op->y, oIt->z = (float)op->z;
iIt->x = (float)ip->x, iIt->y = (float)ip->y;
}
}
for( int i = CV_CALIB_NINTRINSIC + imageCount*6; i < CV_CALIB_NINTRINSIC + imageCount*6
+ pointCounts[0]*3; i++)
{
stdDevs[i] = 0.0;
}
calibrateCameraRO( objectPoints,
imagePoints,
imageSize,
iFixedPoint,
cameraMatrix,
distCoeffs,
rvecs,
tvecs,
newObjMat,
stdDevsMatInt,
stdDevsMatExt,
stdDevsMatObj,
perViewErrorsMat,
flags );
bool releaseObject = iFixedPoint > 0 && iFixedPoint < pointCounts[0] - 1;
if( releaseObject )
{
newObjMat.convertTo( newObjMat, CV_64F );
assert( newObjMat.total() * newObjMat.channels() == static_cast<size_t>(3*pointCounts[0]) );
memcpy( newObjPoints, newObjMat.ptr(), 3*pointCounts[0]*sizeof(double) );
}
assert( stdDevsMatInt.type() == CV_64F );
assert( stdDevsMatInt.total() == static_cast<size_t>(CV_CALIB_NINTRINSIC) );
memcpy( stdDevs, stdDevsMatInt.ptr(), CV_CALIB_NINTRINSIC*sizeof(double) );
assert( stdDevsMatExt.type() == CV_64F );
assert( stdDevsMatExt.total() == static_cast<size_t>(6*imageCount) );
memcpy( stdDevs + CV_CALIB_NINTRINSIC, stdDevsMatExt.ptr(), 6*imageCount*sizeof(double) );
if( releaseObject )
{
assert( stdDevsMatObj.type() == CV_64F );
assert( stdDevsMatObj.total() == static_cast<size_t>(3*pointCounts[0]) );
memcpy( stdDevs + CV_CALIB_NINTRINSIC + 6*imageCount, stdDevsMatObj.ptr(),
3*pointCounts[0]*sizeof(double) );
}
assert( perViewErrorsMat.type() == CV_64F);
assert( perViewErrorsMat.total() == static_cast<size_t>(imageCount) );
memcpy( perViewErrors, perViewErrorsMat.ptr(), imageCount*sizeof(double) );
assert( cameraMatrix.type() == CV_64FC1 );
memcpy( _cameraMatrix, cameraMatrix.ptr(), 9*sizeof(double) );
assert( cameraMatrix.type() == CV_64FC1 );
memcpy( _distortionCoeffs, distCoeffs.ptr(), 4*sizeof(double) );
vector<Mat>::iterator rvecsIt = rvecs.begin();
vector<Mat>::iterator tvecsIt = tvecs.begin();
double *rm = rotationMatrices,
*tm = translationVectors;
assert( rvecsIt->type() == CV_64FC1 );
assert( tvecsIt->type() == CV_64FC1 );
for( int i = 0; i < imageCount; ++rvecsIt, ++tvecsIt, i++, rm+=9, tm+=3 )
{
Mat r9( 3, 3, CV_64FC1 );
cvtest::Rodrigues( *rvecsIt, r9 );
memcpy( rm, r9.ptr(), 9*sizeof(double) );
memcpy( tm, tvecsIt->ptr(), 3*sizeof(double) );
}
}
void CV_CameraCalibrationTest_CPP::project( int pointCount, CvPoint3D64f* _objectPoints,
double* rotationMatrix, double* translationVector,
double* _cameraMatrix, double* distortion, CvPoint2D64f* _imagePoints )
{
Mat objectPoints( pointCount, 3, CV_64FC1, _objectPoints );
Mat rmat( 3, 3, CV_64FC1, rotationMatrix ),
rvec( 1, 3, CV_64FC1 ),
tvec( 1, 3, CV_64FC1, translationVector );
Mat cameraMatrix( 3, 3, CV_64FC1, _cameraMatrix );
Mat distCoeffs( 1, 4, CV_64FC1, distortion );
vector<Point2f> imagePoints;
cvtest::Rodrigues( rmat, rvec );
objectPoints.convertTo( objectPoints, CV_32FC1 );
projectPoints( objectPoints, rvec, tvec,
cameraMatrix, distCoeffs, imagePoints );
vector<Point2f>::const_iterator it = imagePoints.begin();
for( int i = 0; it != imagePoints.end(); ++it, i++ )
{
_imagePoints[i] = cvPoint2D64f( it->x, it->y );
}
}
//----------------------------------------- CV_CalibrationMatrixValuesTest --------------------------------
class CV_CalibrationMatrixValuesTest : public cvtest::BaseTest
{
public:
CV_CalibrationMatrixValuesTest() {}
protected:
void run(int);
virtual void calibMatrixValues( const Mat& cameraMatrix, Size imageSize,
double apertureWidth, double apertureHeight, double& fovx, double& fovy, double& focalLength,
Point2d& principalPoint, double& aspectRatio ) = 0;
};
void CV_CalibrationMatrixValuesTest::run(int)
{
int code = cvtest::TS::OK;
const double fcMinVal = 1e-5;
const double fcMaxVal = 1000;
const double apertureMaxVal = 0.01;
RNG rng = ts->get_rng();
double fx, fy, cx, cy, nx, ny;
Mat cameraMatrix( 3, 3, CV_64FC1 );
cameraMatrix.setTo( Scalar(0) );
fx = cameraMatrix.at<double>(0,0) = rng.uniform( fcMinVal, fcMaxVal );
fy = cameraMatrix.at<double>(1,1) = rng.uniform( fcMinVal, fcMaxVal );
cx = cameraMatrix.at<double>(0,2) = rng.uniform( fcMinVal, fcMaxVal );
cy = cameraMatrix.at<double>(1,2) = rng.uniform( fcMinVal, fcMaxVal );
cameraMatrix.at<double>(2,2) = 1;
Size imageSize( 600, 400 );
double apertureWidth = (double)rng * apertureMaxVal,
apertureHeight = (double)rng * apertureMaxVal;
double fovx, fovy, focalLength, aspectRatio,
goodFovx, goodFovy, goodFocalLength, goodAspectRatio;
Point2d principalPoint, goodPrincipalPoint;
calibMatrixValues( cameraMatrix, imageSize, apertureWidth, apertureHeight,
fovx, fovy, focalLength, principalPoint, aspectRatio );
// calculate calibration matrix values
goodAspectRatio = fy / fx;
if( apertureWidth != 0.0 && apertureHeight != 0.0 )
{
nx = imageSize.width / apertureWidth;
ny = imageSize.height / apertureHeight;
}
else
{
nx = 1.0;
ny = goodAspectRatio;
}
goodFovx = (atan2(cx, fx) + atan2(imageSize.width - cx, fx)) * 180.0 / CV_PI;
goodFovy = (atan2(cy, fy) + atan2(imageSize.height - cy, fy)) * 180.0 / CV_PI;
goodFocalLength = fx / nx;
goodPrincipalPoint.x = cx / nx;
goodPrincipalPoint.y = cy / ny;
// check results
if( fabs(fovx - goodFovx) > FLT_EPSILON )
{
ts->printf( cvtest::TS::LOG, "bad fovx (real=%f, good = %f\n", fovx, goodFovx );
code = cvtest::TS::FAIL_BAD_ACCURACY;
goto _exit_;
}
if( fabs(fovy - goodFovy) > FLT_EPSILON )
{
ts->printf( cvtest::TS::LOG, "bad fovy (real=%f, good = %f\n", fovy, goodFovy );
code = cvtest::TS::FAIL_BAD_ACCURACY;
goto _exit_;
}
if( fabs(focalLength - goodFocalLength) > FLT_EPSILON )
{
ts->printf( cvtest::TS::LOG, "bad focalLength (real=%f, good = %f\n", focalLength, goodFocalLength );
code = cvtest::TS::FAIL_BAD_ACCURACY;
goto _exit_;
}
if( fabs(aspectRatio - goodAspectRatio) > FLT_EPSILON )
{
ts->printf( cvtest::TS::LOG, "bad aspectRatio (real=%f, good = %f\n", aspectRatio, goodAspectRatio );
code = cvtest::TS::FAIL_BAD_ACCURACY;
goto _exit_;
}
if( cv::norm(principalPoint - goodPrincipalPoint) > FLT_EPSILON ) // Point2d
{
ts->printf( cvtest::TS::LOG, "bad principalPoint\n" );
code = cvtest::TS::FAIL_BAD_ACCURACY;
goto _exit_;
}
_exit_:
RNG& _rng = ts->get_rng();
_rng = rng;
ts->set_failed_test_info( code );
}
//----------------------------------------- CV_CalibrationMatrixValuesTest_C --------------------------------
class CV_CalibrationMatrixValuesTest_C : public CV_CalibrationMatrixValuesTest
{
public:
CV_CalibrationMatrixValuesTest_C(){}
protected:
virtual void calibMatrixValues( const Mat& cameraMatrix, Size imageSize,
double apertureWidth, double apertureHeight, double& fovx, double& fovy, double& focalLength,
Point2d& principalPoint, double& aspectRatio );
};
void CV_CalibrationMatrixValuesTest_C::calibMatrixValues( const Mat& _cameraMatrix, Size imageSize,
double apertureWidth, double apertureHeight,
double& fovx, double& fovy, double& focalLength,
Point2d& principalPoint, double& aspectRatio )
{
CvMat cameraMatrix = cvMat(_cameraMatrix);
CvPoint2D64f pp = {0, 0};
cvCalibrationMatrixValues( &cameraMatrix, cvSize(imageSize), apertureWidth, apertureHeight,
&fovx, &fovy, &focalLength, &pp, &aspectRatio );
principalPoint.x = pp.x;
principalPoint.y = pp.y;
}
//----------------------------------------- CV_CalibrationMatrixValuesTest_CPP --------------------------------
class CV_CalibrationMatrixValuesTest_CPP : public CV_CalibrationMatrixValuesTest
{
public:
CV_CalibrationMatrixValuesTest_CPP() {}
protected:
virtual void calibMatrixValues( const Mat& cameraMatrix, Size imageSize,
double apertureWidth, double apertureHeight, double& fovx, double& fovy, double& focalLength,
Point2d& principalPoint, double& aspectRatio );
};
void CV_CalibrationMatrixValuesTest_CPP::calibMatrixValues( const Mat& cameraMatrix, Size imageSize,
double apertureWidth, double apertureHeight,
double& fovx, double& fovy, double& focalLength,
Point2d& principalPoint, double& aspectRatio )
{
calibrationMatrixValues( cameraMatrix, imageSize, apertureWidth, apertureHeight,
fovx, fovy, focalLength, principalPoint, aspectRatio );
}
//----------------------------------------- CV_ProjectPointsTest --------------------------------
void calcdfdx( const vector<vector<Point2f> >& leftF, const vector<vector<Point2f> >& rightF, double eps, Mat& dfdx )
{
const int fdim = 2;
CV_Assert( !leftF.empty() && !rightF.empty() && !leftF[0].empty() && !rightF[0].empty() );
CV_Assert( leftF[0].size() == rightF[0].size() );
CV_Assert( fabs(eps) > std::numeric_limits<double>::epsilon() );
int fcount = (int)leftF[0].size(), xdim = (int)leftF.size();
dfdx.create( fcount*fdim, xdim, CV_64FC1 );
vector<vector<Point2f> >::const_iterator arrLeftIt = leftF.begin();
vector<vector<Point2f> >::const_iterator arrRightIt = rightF.begin();
for( int xi = 0; xi < xdim; xi++, ++arrLeftIt, ++arrRightIt )
{
CV_Assert( (int)arrLeftIt->size() == fcount );
CV_Assert( (int)arrRightIt->size() == fcount );
vector<Point2f>::const_iterator lIt = arrLeftIt->begin();
vector<Point2f>::const_iterator rIt = arrRightIt->begin();
for( int fi = 0; fi < dfdx.rows; fi+=fdim, ++lIt, ++rIt )
{
dfdx.at<double>(fi, xi ) = 0.5 * ((double)(rIt->x - lIt->x)) / eps;
dfdx.at<double>(fi+1, xi ) = 0.5 * ((double)(rIt->y - lIt->y)) / eps;
}
}
}
class CV_ProjectPointsTest : public cvtest::BaseTest
{
public:
CV_ProjectPointsTest() {}
protected:
void run(int);
virtual void project( const Mat& objectPoints,
const Mat& rvec, const Mat& tvec,
const Mat& cameraMatrix,
const Mat& distCoeffs,
vector<Point2f>& imagePoints,
Mat& dpdrot, Mat& dpdt, Mat& dpdf,
Mat& dpdc, Mat& dpddist,
double aspectRatio=0 ) = 0;
};
void CV_ProjectPointsTest::run(int)
{
//typedef float matType;
int code = cvtest::TS::OK;
const int pointCount = 100;
const float zMinVal = 10.0f, zMaxVal = 100.0f,
rMinVal = -0.3f, rMaxVal = 0.3f,
tMinVal = -2.0f, tMaxVal = 2.0f;
const float imgPointErr = 1e-3f,
dEps = 1e-3f;
double err;
Size imgSize( 600, 800 );
Mat_<float> objPoints( pointCount, 3), rvec( 1, 3), rmat, tvec( 1, 3 ), cameraMatrix( 3, 3 ), distCoeffs( 1, 4 ),
leftRvec, rightRvec, leftTvec, rightTvec, leftCameraMatrix, rightCameraMatrix, leftDistCoeffs, rightDistCoeffs;
RNG rng = ts->get_rng();
// generate data
cameraMatrix << 300.f, 0.f, imgSize.width/2.f,
0.f, 300.f, imgSize.height/2.f,
0.f, 0.f, 1.f;
distCoeffs << 0.1, 0.01, 0.001, 0.001;
rvec(0,0) = rng.uniform( rMinVal, rMaxVal );
rvec(0,1) = rng.uniform( rMinVal, rMaxVal );
rvec(0,2) = rng.uniform( rMinVal, rMaxVal );
rmat = cv::Mat_<float>::zeros(3, 3);
cvtest::Rodrigues( rvec, rmat );
tvec(0,0) = rng.uniform( tMinVal, tMaxVal );
tvec(0,1) = rng.uniform( tMinVal, tMaxVal );
tvec(0,2) = rng.uniform( tMinVal, tMaxVal );
for( int y = 0; y < objPoints.rows; y++ )
{
Mat point(1, 3, CV_32FC1, objPoints.ptr(y) );
float z = rng.uniform( zMinVal, zMaxVal );
point.at<float>(0,2) = z;
point.at<float>(0,0) = (rng.uniform(2.f,(float)(imgSize.width-2)) - cameraMatrix(0,2)) / cameraMatrix(0,0) * z;
point.at<float>(0,1) = (rng.uniform(2.f,(float)(imgSize.height-2)) - cameraMatrix(1,2)) / cameraMatrix(1,1) * z;
point = (point - tvec) * rmat;
}
vector<Point2f> imgPoints;
vector<vector<Point2f> > leftImgPoints;
vector<vector<Point2f> > rightImgPoints;
Mat dpdrot, dpdt, dpdf, dpdc, dpddist,
valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist;
project( objPoints, rvec, tvec, cameraMatrix, distCoeffs,
imgPoints, dpdrot, dpdt, dpdf, dpdc, dpddist, 0 );
// calculate and check image points
assert( (int)imgPoints.size() == pointCount );
vector<Point2f>::const_iterator it = imgPoints.begin();
for( int i = 0; i < pointCount; i++, ++it )
{
Point3d p( objPoints(i,0), objPoints(i,1), objPoints(i,2) );
double z = p.x*rmat(2,0) + p.y*rmat(2,1) + p.z*rmat(2,2) + tvec(0,2),
x = (p.x*rmat(0,0) + p.y*rmat(0,1) + p.z*rmat(0,2) + tvec(0,0)) / z,
y = (p.x*rmat(1,0) + p.y*rmat(1,1) + p.z*rmat(1,2) + tvec(0,1)) / z,
r2 = x*x + y*y,
r4 = r2*r2;
Point2f validImgPoint;
double a1 = 2*x*y,
a2 = r2 + 2*x*x,
a3 = r2 + 2*y*y,
cdist = 1+distCoeffs(0,0)*r2+distCoeffs(0,1)*r4;
validImgPoint.x = static_cast<float>((double)cameraMatrix(0,0)*(x*cdist + (double)distCoeffs(0,2)*a1 + (double)distCoeffs(0,3)*a2)
+ (double)cameraMatrix(0,2));
validImgPoint.y = static_cast<float>((double)cameraMatrix(1,1)*(y*cdist + (double)distCoeffs(0,2)*a3 + distCoeffs(0,3)*a1)
+ (double)cameraMatrix(1,2));
if( fabs(it->x - validImgPoint.x) > imgPointErr ||
fabs(it->y - validImgPoint.y) > imgPointErr )
{
ts->printf( cvtest::TS::LOG, "bad image point\n" );
code = cvtest::TS::FAIL_BAD_ACCURACY;
goto _exit_;
}
}
// check derivatives
// 1. rotation
leftImgPoints.resize(3);
rightImgPoints.resize(3);
for( int i = 0; i < 3; i++ )
{
rvec.copyTo( leftRvec ); leftRvec(0,i) -= dEps;
project( objPoints, leftRvec, tvec, cameraMatrix, distCoeffs,
leftImgPoints[i], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
rvec.copyTo( rightRvec ); rightRvec(0,i) += dEps;
project( objPoints, rightRvec, tvec, cameraMatrix, distCoeffs,
rightImgPoints[i], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
}
calcdfdx( leftImgPoints, rightImgPoints, dEps, valDpdrot );
err = cvtest::norm( dpdrot, valDpdrot, NORM_INF );
if( err > 3 )
{
ts->printf( cvtest::TS::LOG, "bad dpdrot: too big difference = %g\n", err );
code = cvtest::TS::FAIL_BAD_ACCURACY;
}
// 2. translation
for( int i = 0; i < 3; i++ )
{
tvec.copyTo( leftTvec ); leftTvec(0,i) -= dEps;
project( objPoints, rvec, leftTvec, cameraMatrix, distCoeffs,
leftImgPoints[i], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
tvec.copyTo( rightTvec ); rightTvec(0,i) += dEps;
project( objPoints, rvec, rightTvec, cameraMatrix, distCoeffs,
rightImgPoints[i], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
}
calcdfdx( leftImgPoints, rightImgPoints, dEps, valDpdt );
if( cvtest::norm( dpdt, valDpdt, NORM_INF ) > 0.2 )
{
ts->printf( cvtest::TS::LOG, "bad dpdtvec\n" );
code = cvtest::TS::FAIL_BAD_ACCURACY;
}
// 3. camera matrix
// 3.1. focus
leftImgPoints.resize(2);
rightImgPoints.resize(2);
cameraMatrix.copyTo( leftCameraMatrix ); leftCameraMatrix(0,0) -= dEps;
project( objPoints, rvec, tvec, leftCameraMatrix, distCoeffs,
leftImgPoints[0], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
cameraMatrix.copyTo( leftCameraMatrix ); leftCameraMatrix(1,1) -= dEps;
project( objPoints, rvec, tvec, leftCameraMatrix, distCoeffs,
leftImgPoints[1], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
cameraMatrix.copyTo( rightCameraMatrix ); rightCameraMatrix(0,0) += dEps;
project( objPoints, rvec, tvec, rightCameraMatrix, distCoeffs,
rightImgPoints[0], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
cameraMatrix.copyTo( rightCameraMatrix ); rightCameraMatrix(1,1) += dEps;
project( objPoints, rvec, tvec, rightCameraMatrix, distCoeffs,
rightImgPoints[1], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
calcdfdx( leftImgPoints, rightImgPoints, dEps, valDpdf );
if ( cvtest::norm( dpdf, valDpdf, NORM_L2 ) > 0.2 )
{
ts->printf( cvtest::TS::LOG, "bad dpdf\n" );
code = cvtest::TS::FAIL_BAD_ACCURACY;
}
// 3.2. principal point
leftImgPoints.resize(2);
rightImgPoints.resize(2);
cameraMatrix.copyTo( leftCameraMatrix ); leftCameraMatrix(0,2) -= dEps;
project( objPoints, rvec, tvec, leftCameraMatrix, distCoeffs,
leftImgPoints[0], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
cameraMatrix.copyTo( leftCameraMatrix ); leftCameraMatrix(1,2) -= dEps;
project( objPoints, rvec, tvec, leftCameraMatrix, distCoeffs,
leftImgPoints[1], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
cameraMatrix.copyTo( rightCameraMatrix ); rightCameraMatrix(0,2) += dEps;
project( objPoints, rvec, tvec, rightCameraMatrix, distCoeffs,
rightImgPoints[0], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
cameraMatrix.copyTo( rightCameraMatrix ); rightCameraMatrix(1,2) += dEps;
project( objPoints, rvec, tvec, rightCameraMatrix, distCoeffs,
rightImgPoints[1], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
calcdfdx( leftImgPoints, rightImgPoints, dEps, valDpdc );
if ( cvtest::norm( dpdc, valDpdc, NORM_L2 ) > 0.2 )
{
ts->printf( cvtest::TS::LOG, "bad dpdc\n" );
code = cvtest::TS::FAIL_BAD_ACCURACY;
}
// 4. distortion
leftImgPoints.resize(distCoeffs.cols);
rightImgPoints.resize(distCoeffs.cols);
for( int i = 0; i < distCoeffs.cols; i++ )
{
distCoeffs.copyTo( leftDistCoeffs ); leftDistCoeffs(0,i) -= dEps;
project( objPoints, rvec, tvec, cameraMatrix, leftDistCoeffs,
leftImgPoints[i], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
distCoeffs.copyTo( rightDistCoeffs ); rightDistCoeffs(0,i) += dEps;
project( objPoints, rvec, tvec, cameraMatrix, rightDistCoeffs,
rightImgPoints[i], valDpdrot, valDpdt, valDpdf, valDpdc, valDpddist, 0 );
}
calcdfdx( leftImgPoints, rightImgPoints, dEps, valDpddist );
if( cvtest::norm( dpddist, valDpddist, NORM_L2 ) > 0.3 )
{
ts->printf( cvtest::TS::LOG, "bad dpddist\n" );
code = cvtest::TS::FAIL_BAD_ACCURACY;
}
_exit_:
RNG& _rng = ts->get_rng();
_rng = rng;
ts->set_failed_test_info( code );
}
//----------------------------------------- CV_ProjectPointsTest_C --------------------------------
class CV_ProjectPointsTest_C : public CV_ProjectPointsTest
{
public:
CV_ProjectPointsTest_C() {}
protected:
virtual void project( const Mat& objectPoints,
const Mat& rvec, const Mat& tvec,
const Mat& cameraMatrix,
const Mat& distCoeffs,
vector<Point2f>& imagePoints,
Mat& dpdrot, Mat& dpdt, Mat& dpdf,
Mat& dpdc, Mat& dpddist,
double aspectRatio=0 );
};
void CV_ProjectPointsTest_C::project( const Mat& opoints, const Mat& rvec, const Mat& tvec,
const Mat& cameraMatrix, const Mat& distCoeffs, vector<Point2f>& ipoints,
Mat& dpdrot, Mat& dpdt, Mat& dpdf, Mat& dpdc, Mat& dpddist, double aspectRatio)
{
int npoints = opoints.cols*opoints.rows*opoints.channels()/3;
ipoints.resize(npoints);
dpdrot.create(npoints*2, 3, CV_64F);
dpdt.create(npoints*2, 3, CV_64F);
dpdf.create(npoints*2, 2, CV_64F);
dpdc.create(npoints*2, 2, CV_64F);
dpddist.create(npoints*2, distCoeffs.rows + distCoeffs.cols - 1, CV_64F);
Mat imagePoints(ipoints);
CvMat _objectPoints = cvMat(opoints), _imagePoints = cvMat(imagePoints);
CvMat _rvec = cvMat(rvec), _tvec = cvMat(tvec), _cameraMatrix = cvMat(cameraMatrix), _distCoeffs = cvMat(distCoeffs);
CvMat _dpdrot = cvMat(dpdrot), _dpdt = cvMat(dpdt), _dpdf = cvMat(dpdf), _dpdc = cvMat(dpdc), _dpddist = cvMat(dpddist);
cvProjectPoints2( &_objectPoints, &_rvec, &_tvec, &_cameraMatrix, &_distCoeffs,
&_imagePoints, &_dpdrot, &_dpdt, &_dpdf, &_dpdc, &_dpddist, aspectRatio );
}
//----------------------------------------- CV_ProjectPointsTest_CPP --------------------------------
class CV_ProjectPointsTest_CPP : public CV_ProjectPointsTest
{
public:
CV_ProjectPointsTest_CPP() {}
protected:
virtual void project( const Mat& objectPoints,
const Mat& rvec, const Mat& tvec,
const Mat& cameraMatrix,
const Mat& distCoeffs,
vector<Point2f>& imagePoints,
Mat& dpdrot, Mat& dpdt, Mat& dpdf,
Mat& dpdc, Mat& dpddist,
double aspectRatio=0 );
};
void CV_ProjectPointsTest_CPP::project( const Mat& objectPoints, const Mat& rvec, const Mat& tvec,
const Mat& cameraMatrix, const Mat& distCoeffs, vector<Point2f>& imagePoints,
Mat& dpdrot, Mat& dpdt, Mat& dpdf, Mat& dpdc, Mat& dpddist, double aspectRatio)
{
Mat J;
projectPoints( objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints, J, aspectRatio);
J.colRange(0, 3).copyTo(dpdrot);
J.colRange(3, 6).copyTo(dpdt);
J.colRange(6, 8).copyTo(dpdf);
J.colRange(8, 10).copyTo(dpdc);
J.colRange(10, J.cols).copyTo(dpddist);
}
///////////////////////////////// Stereo Calibration /////////////////////////////////////
class CV_StereoCalibrationCornerTest : public cvtest::BaseTest
{
public:
CV_StereoCalibrationCornerTest();
~CV_StereoCalibrationCornerTest();
void clear();
protected:
void run(int);
};
CV_StereoCalibrationCornerTest::CV_StereoCalibrationCornerTest()
{
}
CV_StereoCalibrationCornerTest::~CV_StereoCalibrationCornerTest()
{
clear();
}
void CV_StereoCalibrationCornerTest::clear()
{
cvtest::BaseTest::clear();
}
static bool resizeCameraMatrix(const Mat &in_cm, Mat &dst_cm, double scale)
{
if (in_cm.empty() || in_cm.cols != 3 || in_cm.rows != 3 || in_cm.type() != CV_64FC1)
return false;
dst_cm = in_cm * scale;
dst_cm.at<double>(2, 2) = 1.0;
return true;
}
// see https://github.com/opencv/opencv/pull/6836 for details
void CV_StereoCalibrationCornerTest::run(int)
{
const Matx33d M1(906.7857732303256, 0.0, 1026.456125870669,
0.0, 906.7857732303256, 540.0531577669913,
0.0, 0.0, 1.0);
const Matx33d M2(906.782205162265, 0.0, 1014.619997352785,
0.0, 906.782205162265, 561.9990018887295,
0.0, 0.0, 1.0);
const Matx<double, 5, 1> D1(0.0064836857220181504, 0.033880363848984636, 0.0, 0.0, -0.042996356352306114);
const Matx<double, 5, 1> D2(0.023754068600491646, -0.02364619610835259, 0.0, 0.0, 0.0015014971456262652);
const Size imageSize(2048, 1088);
const double scale = 0.25;
const Matx33d Rot(0.999788461750194, -0.015696495349844446, -0.013291041528534329,
0.015233019205877604, 0.999296086451901, -0.034282455101525826,
0.01381980018141639, 0.03407274036010432, 0.9993238021218641);
const Matx31d T(-1.552005597952028, 0.0019508251875105093, -0.023335501616116062);
// generate camera matrices for resized image rectification.
Mat srcM1(M1), srcM2(M2);
Mat rszM1, rszM2;
resizeCameraMatrix(srcM1, rszM1, scale);
resizeCameraMatrix(srcM2, rszM2, scale);
Size rszImageSize(cvRound(scale * imageSize.width), cvRound(scale * imageSize.height));
Size srcImageSize = imageSize;
// apply stereoRectify
Mat srcR[2], srcP[2], srcQ;
Mat rszR[2], rszP[2], rszQ;
stereoRectify(srcM1, D1, srcM2, D2, srcImageSize, Rot, T,
srcR[0], srcR[1], srcP[0], srcP[1], srcQ,
CALIB_ZERO_DISPARITY, 0);
stereoRectify(rszM1, D1, rszM2, D2, rszImageSize, Rot, T,
rszR[0], rszR[1], rszP[0], rszP[1], rszQ,
CALIB_ZERO_DISPARITY, 0);
// generate remap maps
Mat srcRmap[2], rszRmap[2];
initUndistortRectifyMap(srcM1, D1, srcR[0], srcP[0], srcImageSize, CV_32FC2, srcRmap[0], srcRmap[1]);
initUndistortRectifyMap(rszM1, D1, rszR[0], rszP[0], rszImageSize, CV_32FC2, rszRmap[0], rszRmap[1]);
// generate source image
// it's an artificial pattern with white rect in the center
Mat image(imageSize, CV_8UC3);
image.setTo(0);
image(cv::Rect(imageSize.width / 3, imageSize.height / 3, imageSize.width / 3, imageSize.height / 3)).setTo(255);
// perform remap-resize
Mat src_result;
remap(image, src_result, srcRmap[0], srcRmap[1], INTER_LINEAR);
resize(src_result, src_result, Size(), scale, scale, INTER_LINEAR_EXACT);
// perform resize-remap
Mat rsz_result;
resize(image, rsz_result, Size(), scale, scale, INTER_LINEAR_EXACT);
remap(rsz_result, rsz_result, rszRmap[0], rszRmap[1], INTER_LINEAR);
// modifying the camera matrix with resizeCameraMatrix must yield the same
// result as calibrating the downscaled images
int cnz = countNonZero((cv::Mat(src_result - rsz_result) != 0)(
cv::Rect(src_result.cols / 3, src_result.rows / 3,
(int)(src_result.cols / 3.1), int(src_result.rows / 3.1))));
if (cnz)
{
ts->printf( cvtest::TS::LOG, "The camera matrix is wrong for downscaled image\n");
ts->set_failed_test_info( cvtest::TS::FAIL_BAD_ACCURACY );
}
}
class CV_StereoCalibrationTest : public cvtest::BaseTest
{
public:
CV_StereoCalibrationTest();
~CV_StereoCalibrationTest();
void clear();
protected:
bool checkPandROI( int test_case_idx,
const Mat& M, const Mat& D, const Mat& R,
const Mat& P, Size imgsize, Rect roi );
// covers of tested functions
virtual double calibrateStereoCamera( const vector<vector<Point3f> >& objectPoints,
const vector<vector<Point2f> >& imagePoints1,
const vector<vector<Point2f> >& imagePoints2,
Mat& cameraMatrix1, Mat& distCoeffs1,
Mat& cameraMatrix2, Mat& distCoeffs2,
Size imageSize, Mat& R, Mat& T,
Mat& E, Mat& F, TermCriteria criteria, int flags ) = 0;
virtual void rectify( const Mat& cameraMatrix1, const Mat& distCoeffs1,
const Mat& cameraMatrix2, const Mat& distCoeffs2,
Size imageSize, const Mat& R, const Mat& T,
Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q,
double alpha, Size newImageSize,
Rect* validPixROI1, Rect* validPixROI2, int flags ) = 0;
virtual bool rectifyUncalibrated( const Mat& points1,
const Mat& points2, const Mat& F, Size imgSize,
Mat& H1, Mat& H2, double threshold=5 ) = 0;
virtual void triangulate( const Mat& P1, const Mat& P2,
const Mat &points1, const Mat &points2,
Mat &points4D ) = 0;
virtual void correct( const Mat& F,
const Mat &points1, const Mat &points2,
Mat &newPoints1, Mat &newPoints2 ) = 0;
void run(int);
};
CV_StereoCalibrationTest::CV_StereoCalibrationTest()
{
}
CV_StereoCalibrationTest::~CV_StereoCalibrationTest()
{
clear();
}
void CV_StereoCalibrationTest::clear()
{
cvtest::BaseTest::clear();
}
bool CV_StereoCalibrationTest::checkPandROI( int test_case_idx, const Mat& M, const Mat& D, const Mat& R,
const Mat& P, Size imgsize, Rect roi )
{
const double eps = 0.05;
const int N = 21;
int x, y, k;
vector<Point2f> pts, upts;
// step 1. check that all the original points belong to the destination image
for( y = 0; y < N; y++ )
for( x = 0; x < N; x++ )
pts.push_back(Point2f((float)x*imgsize.width/(N-1), (float)y*imgsize.height/(N-1)));
undistortPoints(Mat(pts), upts, M, D, R, P );
for( k = 0; k < N*N; k++ )
if( upts[k].x < -imgsize.width*eps || upts[k].x > imgsize.width*(1+eps) ||
upts[k].y < -imgsize.height*eps || upts[k].y > imgsize.height*(1+eps) )
{
ts->printf(cvtest::TS::LOG, "Test #%d. The point (%g, %g) was mapped to (%g, %g) which is out of image\n",
test_case_idx, pts[k].x, pts[k].y, upts[k].x, upts[k].y);
return false;
}
// step 2. check that all the points inside ROI belong to the original source image
Mat temp(imgsize, CV_8U), utemp, map1, map2;
temp = Scalar::all(1);
initUndistortRectifyMap(M, D, R, P, imgsize, CV_16SC2, map1, map2);
remap(temp, utemp, map1, map2, INTER_LINEAR);
if(roi.x < 0 || roi.y < 0 || roi.x + roi.width > imgsize.width || roi.y + roi.height > imgsize.height)
{
ts->printf(cvtest::TS::LOG, "Test #%d. The ROI=(%d, %d, %d, %d) is outside of the image rectangle\n",
test_case_idx, roi.x, roi.y, roi.width, roi.height);
return false;
}
double s = sum(utemp(roi))[0];
if( s > roi.area() || roi.area() - s > roi.area()*(1-eps) )
{
ts->printf(cvtest::TS::LOG, "Test #%d. The ratio of black pixels inside the valid ROI (~%g%%) is too large\n",
test_case_idx, s*100./roi.area());
return false;
}
return true;
}
void CV_StereoCalibrationTest::run( int )
{
const int ntests = 1;
const double maxReprojErr = 2;
const double maxScanlineDistErr_c = 3;
const double maxScanlineDistErr_uc = 4;
FILE* f = 0;
for(int testcase = 1; testcase <= ntests; testcase++)
{
cv::String filepath;
char buf[1000];
filepath = cv::format("%scv/stereo/case%d/stereo_calib.txt", ts->get_data_path().c_str(), testcase );
f = fopen(filepath.c_str(), "rt");
Size patternSize;
vector<string> imglist;
if( !f || !fgets(buf, sizeof(buf)-3, f) || sscanf(buf, "%d%d", &patternSize.width, &patternSize.height) != 2 )
{
ts->printf( cvtest::TS::LOG, "The file %s can not be opened or has invalid content\n", filepath.c_str() );
ts->set_failed_test_info( f ? cvtest::TS::FAIL_INVALID_TEST_DATA : cvtest::TS::FAIL_MISSING_TEST_DATA );
if (f)
fclose(f);
return;
}
for(;;)
{
if( !fgets( buf, sizeof(buf)-3, f ))
break;
size_t len = strlen(buf);
while( len > 0 && isspace(buf[len-1]))
buf[--len] = '\0';
if( buf[0] == '#')
continue;
filepath = cv::format("%scv/stereo/case%d/%s", ts->get_data_path().c_str(), testcase, buf );
imglist.push_back(string(filepath));
}
fclose(f);
if( imglist.size() == 0 || imglist.size() % 2 != 0 )
{
ts->printf( cvtest::TS::LOG, "The number of images is 0 or an odd number in the case #%d\n", testcase );
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_TEST_DATA );
return;
}
int nframes = (int)(imglist.size()/2);
int npoints = patternSize.width*patternSize.height;
vector<vector<Point3f> > objpt(nframes);
vector<vector<Point2f> > imgpt1(nframes);
vector<vector<Point2f> > imgpt2(nframes);
Size imgsize;
int total = 0;
for( int i = 0; i < nframes; i++ )
{
Mat left = imread(imglist[i*2]);
Mat right = imread(imglist[i*2+1]);
if(left.empty() || right.empty())
{
ts->printf( cvtest::TS::LOG, "Can not load images %s and %s, testcase %d\n",
imglist[i*2].c_str(), imglist[i*2+1].c_str(), testcase );
ts->set_failed_test_info( cvtest::TS::FAIL_MISSING_TEST_DATA );
return;
}
imgsize = left.size();
bool found1 = findChessboardCorners(left, patternSize, imgpt1[i]);
bool found2 = findChessboardCorners(right, patternSize, imgpt2[i]);
if(!found1 || !found2)
{
ts->printf( cvtest::TS::LOG, "The function could not detect boards (%d x %d) on the images %s and %s, testcase %d\n",
patternSize.width, patternSize.height,
imglist[i*2].c_str(), imglist[i*2+1].c_str(), testcase );
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_OUTPUT );
return;
}
total += (int)imgpt1[i].size();
for( int j = 0; j < npoints; j++ )
objpt[i].push_back(Point3f((float)(j%patternSize.width), (float)(j/patternSize.width), 0.f));
}
// calibrate the stereo pair, then rectify (calibrated path)
Mat M1 = Mat::eye(3,3,CV_64F), M2 = Mat::eye(3,3,CV_64F), D1(5,1,CV_64F), D2(5,1,CV_64F), R, T, E, F;
M1.at<double>(0,2) = M2.at<double>(0,2)=(imgsize.width-1)*0.5;
M1.at<double>(1,2) = M2.at<double>(1,2)=(imgsize.height-1)*0.5;
D1 = Scalar::all(0);
D2 = Scalar::all(0);
double err = calibrateStereoCamera(objpt, imgpt1, imgpt2, M1, D1, M2, D2, imgsize, R, T, E, F,
TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 30, 1e-6),
CV_CALIB_SAME_FOCAL_LENGTH
//+ CV_CALIB_FIX_ASPECT_RATIO
+ CV_CALIB_FIX_PRINCIPAL_POINT
+ CV_CALIB_ZERO_TANGENT_DIST
+ CV_CALIB_FIX_K3
+ CV_CALIB_FIX_K4 + CV_CALIB_FIX_K5 //+ CV_CALIB_FIX_K6
);
err /= nframes*npoints;
if( err > maxReprojErr )
{
ts->printf( cvtest::TS::LOG, "The average reprojection error is too big (=%g), testcase %d\n", err, testcase);
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_OUTPUT );
return;
}
Mat R1, R2, P1, P2, Q;
Rect roi1, roi2;
rectify(M1, D1, M2, D2, imgsize, R, T, R1, R2, P1, P2, Q, 1, imgsize, &roi1, &roi2, 0);
Mat eye33 = Mat::eye(3,3,CV_64F);
Mat R1t = R1.t(), R2t = R2.t();
if( cvtest::norm(R1t*R1 - eye33, NORM_L2) > 0.01 ||
cvtest::norm(R2t*R2 - eye33, NORM_L2) > 0.01 ||
abs(determinant(F)) > 0.01)
{
ts->printf( cvtest::TS::LOG, "The computed (by rectify) R1 and R2 are not orthogonal,"
"or the computed (by calibrate) F is not singular, testcase %d\n", testcase);
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_OUTPUT );
return;
}
if(!checkPandROI(testcase, M1, D1, R1, P1, imgsize, roi1))
{
ts->set_failed_test_info( cvtest::TS::FAIL_BAD_ACCURACY );
return;
}
if(!checkPandROI(testcase, M2, D2, R2, P2, imgsize, roi2))
{
ts->set_failed_test_info( cvtest::TS::FAIL_BAD_ACCURACY );
return;
}
//check that Tx after rectification is equal to distance between cameras
double tx = fabs(P2.at<double>(0, 3) / P2.at<double>(0, 0));
if (fabs(tx - cvtest::norm(T, NORM_L2)) > 1e-5)
{
ts->set_failed_test_info( cvtest::TS::FAIL_BAD_ACCURACY );
return;
}
//check that Q reprojects points before the camera
double testPoint[4] = {0.0, 0.0, 100.0, 1.0};
Mat reprojectedTestPoint = Q * Mat_<double>(4, 1, testPoint);
CV_Assert(reprojectedTestPoint.type() == CV_64FC1);
if( reprojectedTestPoint.at<double>(2) / reprojectedTestPoint.at<double>(3) < 0 )
{
ts->printf( cvtest::TS::LOG, "A point after rectification is reprojected behind the camera, testcase %d\n", testcase);
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_OUTPUT );
}
//check that Q reprojects the same points as reconstructed by triangulation
const float minCoord = -300.0f;
const float maxCoord = 300.0f;
const float minDisparity = 0.1f;
const float maxDisparity = 60.0f;
const int pointsCount = 500;
const float requiredAccuracy = 1e-3f;
const float allowToFail = 0.2f; // 20%
RNG& rng = ts->get_rng();
Mat projectedPoints_1(2, pointsCount, CV_32FC1);
Mat projectedPoints_2(2, pointsCount, CV_32FC1);
Mat disparities(1, pointsCount, CV_32FC1);
rng.fill(projectedPoints_1, RNG::UNIFORM, minCoord, maxCoord);
rng.fill(disparities, RNG::UNIFORM, minDisparity, maxDisparity);
projectedPoints_2.row(0) = projectedPoints_1.row(0) - disparities;
Mat ys_2 = projectedPoints_2.row(1);
projectedPoints_1.row(1).copyTo(ys_2);
Mat points4d;
triangulate(P1, P2, projectedPoints_1, projectedPoints_2, points4d);
Mat homogeneousPoints4d = points4d.t();
const int dimension = 4;
homogeneousPoints4d = homogeneousPoints4d.reshape(dimension);
Mat triangulatedPoints;
convertPointsFromHomogeneous(homogeneousPoints4d, triangulatedPoints);
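// Reproject the same measurements through Q: for a rectified pair,
// Q * [x, y, disparity, 1]^T gives the homogeneous 3D point, so pushing the
// (x, y, disparity) triples through perspectiveTransform(Q) must agree with
// the triangulated points, up to a small fraction of ill-conditioned points
// with w close to zero.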
Mat sparsePoints;
sparsePoints.push_back(projectedPoints_1);
sparsePoints.push_back(disparities);
sparsePoints = sparsePoints.t();
sparsePoints = sparsePoints.reshape(3);
Mat reprojectedPoints;
perspectiveTransform(sparsePoints, reprojectedPoints, Q);
Mat diff;
absdiff(triangulatedPoints, reprojectedPoints, diff);
Mat mask = diff > requiredAccuracy;
mask = mask.reshape(1);
mask = mask.col(0) | mask.col(1) | mask.col(2);
int numFailed = countNonZero(mask);
#if 0
std::cout << "numFailed=" << numFailed << std::endl;
for (int i = 0; i < triangulatedPoints.rows; i++)
{
if (mask.at<uchar>(i))
{
// failed points usually have 'w'~0 (points4d[3])
std::cout << "i=" << i << " triangulatePoints=" << triangulatedPoints.row(i) << " reprojectedPoints=" << reprojectedPoints.row(i) << std::endl <<
" points4d=" << points4d.col(i).t() << " projectedPoints_1=" << projectedPoints_1.col(i).t() << " disparities=" << disparities.col(i).t() << std::endl;
}
}
#endif
if (numFailed >= allowToFail * pointsCount)
{
ts->printf( cvtest::TS::LOG, "Points reprojected with a matrix Q and points reconstructed by triangulation are different (tolerance=%g, failed=%d), testcase %d\n",
requiredAccuracy, numFailed, testcase);
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_OUTPUT );
}
//check correctMatches
const float constraintAccuracy = 1e-5f;
Mat newPoints1, newPoints2;
Mat points1 = projectedPoints_1.t();
points1 = points1.reshape(2, 1);
Mat points2 = projectedPoints_2.t();
points2 = points2.reshape(2, 1);
correctMatches(F, points1, points2, newPoints1, newPoints2);
Mat newHomogeneousPoints1, newHomogeneousPoints2;
convertPointsToHomogeneous(newPoints1, newHomogeneousPoints1);
convertPointsToHomogeneous(newPoints2, newHomogeneousPoints2);
newHomogeneousPoints1 = newHomogeneousPoints1.reshape(1);
newHomogeneousPoints2 = newHomogeneousPoints2.reshape(1);
Mat typedF;
F.convertTo(typedF, newHomogeneousPoints1.type());
for (int i = 0; i < newHomogeneousPoints1.rows; ++i)
{
Mat error = newHomogeneousPoints2.row(i) * typedF * newHomogeneousPoints1.row(i).t();
CV_Assert(error.rows == 1 && error.cols == 1);
if (cvtest::norm(error, NORM_L2) > constraintAccuracy)
{
ts->printf( cvtest::TS::LOG, "Epipolar constraint is violated after correctMatches, testcase %d\n", testcase);
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_OUTPUT );
}
}
// rectifyUncalibrated
CV_Assert( imgpt1.size() == imgpt2.size() );
Mat _imgpt1( total, 1, CV_32FC2 ), _imgpt2( total, 1, CV_32FC2 );
vector<vector<Point2f> >::const_iterator iit1 = imgpt1.begin();
vector<vector<Point2f> >::const_iterator iit2 = imgpt2.begin();
for( int pi = 0; iit1 != imgpt1.end(); ++iit1, ++iit2 )
{
vector<Point2f>::const_iterator pit1 = iit1->begin();
vector<Point2f>::const_iterator pit2 = iit2->begin();
CV_Assert( iit1->size() == iit2->size() );
for( ; pit1 != iit1->end(); ++pit1, ++pit2, pi++ )
{
_imgpt1.at<Point2f>(pi,0) = Point2f( pit1->x, pit1->y );
_imgpt2.at<Point2f>(pi,0) = Point2f( pit2->x, pit2->y );
}
}
Mat _M1, _M2, _D1, _D2;
vector<Mat> _R1, _R2, _T1, _T2;
calibrateCamera( objpt, imgpt1, imgsize, _M1, _D1, _R1, _T1, 0 );
calibrateCamera( objpt, imgpt2, imgsize, _M2, _D2, _R2, _T2, 0 );
undistortPoints( _imgpt1, _imgpt1, _M1, _D1, Mat(), _M1 );
undistortPoints( _imgpt2, _imgpt2, _M2, _D2, Mat(), _M2 );
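// Estimate the fundamental matrix from the undistorted correspondences and
// compute the uncalibrated rectification homographies H1, H2; they are
// compared against the calibrated rectification in the loop below.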
Mat matF, _H1, _H2;
matF = findFundamentalMat( _imgpt1, _imgpt2 );
rectifyUncalibrated( _imgpt1, _imgpt2, matF, imgsize, _H1, _H2 );
Mat rectifPoints1, rectifPoints2;
perspectiveTransform( _imgpt1, rectifPoints1, _H1 );
perspectiveTransform( _imgpt2, rectifPoints2, _H2 );
bool verticalStereo = abs(P2.at<double>(0,3)) < abs(P2.at<double>(1,3));
double maxDiff_c = 0, maxDiff_uc = 0;
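// After rectification corresponding points must share (almost) the same
// scanline; compare the cross-scanline coordinate for the calibrated
// (R1/P1, R2/P2) and the uncalibrated (H1, H2) rectifications.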
for( int i = 0, k = 0; i < nframes; i++ )
{
vector<Point2f> temp[2];
undistortPoints(Mat(imgpt1[i]), temp[0], M1, D1, R1, P1);
undistortPoints(Mat(imgpt2[i]), temp[1], M2, D2, R2, P2);
for( int j = 0; j < npoints; j++, k++ )
{
double diff_c = verticalStereo ? abs(temp[0][j].x - temp[1][j].x) : abs(temp[0][j].y - temp[1][j].y);
Point2f d = rectifPoints1.at<Point2f>(k,0) - rectifPoints2.at<Point2f>(k,0);
double diff_uc = verticalStereo ? abs(d.x) : abs(d.y);
maxDiff_c = max(maxDiff_c, diff_c);
maxDiff_uc = max(maxDiff_uc, diff_uc);
if( maxDiff_c > maxScanlineDistErr_c )
{
ts->printf( cvtest::TS::LOG, "The distance between %s coordinates is too big(=%g) (used calibrated stereo), testcase %d\n",
verticalStereo ? "x" : "y", diff_c, testcase);
ts->set_failed_test_info( cvtest::TS::FAIL_BAD_ACCURACY );
return;
}
if( maxDiff_uc > maxScanlineDistErr_uc )
{
ts->printf( cvtest::TS::LOG, "The distance between %s coordinates is too big(=%g) (used uncalibrated stereo), testcase %d\n",
verticalStereo ? "x" : "y", diff_uc, testcase);
ts->set_failed_test_info( cvtest::TS::FAIL_BAD_ACCURACY );
return;
}
}
}
ts->printf( cvtest::TS::LOG, "Testcase %d. Max distance (calibrated) =%g\n"
"Max distance (uncalibrated) =%g\n", testcase, maxDiff_c, maxDiff_uc );
}
}
//-------------------------------- CV_StereoCalibrationTest_C ------------------------------
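// Implements the covers via the legacy C API: inputs are repacked into CvMat
// headers and passed to cvStereoCalibrate, cvStereoRectify,
// cvStereoRectifyUncalibrated, cvTriangulatePoints and cvCorrectMatches.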
class CV_StereoCalibrationTest_C : public CV_StereoCalibrationTest
{
public:
CV_StereoCalibrationTest_C() {}
protected:
virtual double calibrateStereoCamera( const vector<vector<Point3f> >& objectPoints,
const vector<vector<Point2f> >& imagePoints1,
const vector<vector<Point2f> >& imagePoints2,
Mat& cameraMatrix1, Mat& distCoeffs1,
Mat& cameraMatrix2, Mat& distCoeffs2,
Size imageSize, Mat& R, Mat& T,
Mat& E, Mat& F, TermCriteria criteria, int flags );
virtual void rectify( const Mat& cameraMatrix1, const Mat& distCoeffs1,
const Mat& cameraMatrix2, const Mat& distCoeffs2,
Size imageSize, const Mat& R, const Mat& T,
Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q,
double alpha, Size newImageSize,
Rect* validPixROI1, Rect* validPixROI2, int flags );
virtual bool rectifyUncalibrated( const Mat& points1,
const Mat& points2, const Mat& F, Size imgSize,
Mat& H1, Mat& H2, double threshold=5 );
virtual void triangulate( const Mat& P1, const Mat& P2,
const Mat &points1, const Mat &points2,
Mat &points4D );
virtual void correct( const Mat& F,
const Mat &points1, const Mat &points2,
Mat &newPoints1, Mat &newPoints2 );
};
double CV_StereoCalibrationTest_C::calibrateStereoCamera( const vector<vector<Point3f> >& objectPoints,
const vector<vector<Point2f> >& imagePoints1,
const vector<vector<Point2f> >& imagePoints2,
Mat& cameraMatrix1, Mat& distCoeffs1,
Mat& cameraMatrix2, Mat& distCoeffs2,
Size imageSize, Mat& R, Mat& T,
Mat& E, Mat& F, TermCriteria criteria, int flags )
{
cameraMatrix1.create( 3, 3, CV_64F );
cameraMatrix2.create( 3, 3, CV_64F);
distCoeffs1.create( 1, 5, CV_64F);
distCoeffs2.create( 1, 5, CV_64F);
R.create(3, 3, CV_64F);
T.create(3, 1, CV_64F);
E.create(3, 3, CV_64F);
F.create(3, 3, CV_64F);
int nimages = (int)objectPoints.size(), total = 0;
for( int i = 0; i < nimages; i++ )
{
total += (int)objectPoints[i].size();
}
Mat npoints( 1, nimages, CV_32S ),
objPt( 1, total, traits::Type<Point3f>::value ),
imgPt( 1, total, traits::Type<Point2f>::value ),
imgPt2( 1, total, traits::Type<Point2f>::value );
Point2f* imgPtData2 = imgPt2.ptr<Point2f>();
Point3f* objPtData = objPt.ptr<Point3f>();
Point2f* imgPtData = imgPt.ptr<Point2f>();
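// Flatten the per-view point vectors into single-row arrays as expected by the
// C API; npoints records how many points each view contributes.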
for( int i = 0, ni = 0, j = 0; i < nimages; i++, j += ni )
{
ni = (int)objectPoints[i].size();
npoints.ptr<int>()[i] = ni;
std::copy(objectPoints[i].begin(), objectPoints[i].end(), objPtData + j);
std::copy(imagePoints1[i].begin(), imagePoints1[i].end(), imgPtData + j);
std::copy(imagePoints2[i].begin(), imagePoints2[i].end(), imgPtData2 + j);
}
CvMat _objPt = cvMat(objPt), _imgPt = cvMat(imgPt), _imgPt2 = cvMat(imgPt2), _npoints = cvMat(npoints);
CvMat _cameraMatrix1 = cvMat(cameraMatrix1), _distCoeffs1 = cvMat(distCoeffs1);
CvMat _cameraMatrix2 = cvMat(cameraMatrix2), _distCoeffs2 = cvMat(distCoeffs2);
CvMat matR = cvMat(R), matT = cvMat(T), matE = cvMat(E), matF = cvMat(F);
return cvStereoCalibrate(&_objPt, &_imgPt, &_imgPt2, &_npoints, &_cameraMatrix1,
&_distCoeffs1, &_cameraMatrix2, &_distCoeffs2, cvSize(imageSize),
&matR, &matT, &matE, &matF, flags, cvTermCriteria(criteria));
}
void CV_StereoCalibrationTest_C::rectify( const Mat& cameraMatrix1, const Mat& distCoeffs1,
const Mat& cameraMatrix2, const Mat& distCoeffs2,
Size imageSize, const Mat& R, const Mat& T,
Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q,
double alpha, Size newImageSize,
Rect* validPixROI1, Rect* validPixROI2, int flags )
{
int rtype = CV_64F;
R1.create(3, 3, rtype);
R2.create(3, 3, rtype);
P1.create(3, 4, rtype);
P2.create(3, 4, rtype);
Q.create(4, 4, rtype);
CvMat _cameraMatrix1 = cvMat(cameraMatrix1), _distCoeffs1 = cvMat(distCoeffs1);
CvMat _cameraMatrix2 = cvMat(cameraMatrix2), _distCoeffs2 = cvMat(distCoeffs2);
CvMat matR = cvMat(R), matT = cvMat(T), _R1 = cvMat(R1), _R2 = cvMat(R2), _P1 = cvMat(P1), _P2 = cvMat(P2), matQ = cvMat(Q);
cvStereoRectify( &_cameraMatrix1, &_cameraMatrix2, &_distCoeffs1, &_distCoeffs2,
cvSize(imageSize), &matR, &matT, &_R1, &_R2, &_P1, &_P2, &matQ, flags,
alpha, cvSize(newImageSize), (CvRect*)validPixROI1, (CvRect*)validPixROI2);
}
bool CV_StereoCalibrationTest_C::rectifyUncalibrated( const Mat& points1,
const Mat& points2, const Mat& F, Size imgSize, Mat& H1, Mat& H2, double threshold )
{
H1.create(3, 3, CV_64F);
H2.create(3, 3, CV_64F);
CvMat _pt1 = cvMat(points1), _pt2 = cvMat(points2), matF, *pF=0, _H1 = cvMat(H1), _H2 = cvMat(H2);
if( F.size() == Size(3, 3) )
pF = &(matF = cvMat(F));
return cvStereoRectifyUncalibrated(&_pt1, &_pt2, pF, cvSize(imgSize), &_H1, &_H2, threshold) > 0;
}
void CV_StereoCalibrationTest_C::triangulate( const Mat& P1, const Mat& P2,
const Mat &points1, const Mat &points2,
Mat &points4D )
{
CvMat _P1 = cvMat(P1), _P2 = cvMat(P2), _points1 = cvMat(points1), _points2 = cvMat(points2);
points4D.create(4, points1.cols, points1.type());
CvMat _points4D = cvMat(points4D);
cvTriangulatePoints(&_P1, &_P2, &_points1, &_points2, &_points4D);
}
void CV_StereoCalibrationTest_C::correct( const Mat& F,
const Mat &points1, const Mat &points2,
Mat &newPoints1, Mat &newPoints2 )
{
CvMat _F = cvMat(F), _points1 = cvMat(points1), _points2 = cvMat(points2);
newPoints1.create(1, points1.cols, points1.type());
newPoints2.create(1, points2.cols, points2.type());
CvMat _newPoints1 = cvMat(newPoints1), _newPoints2 = cvMat(newPoints2);
cvCorrectMatches(&_F, &_points1, &_points2, &_newPoints1, &_newPoints2);
}
//-------------------------------- CV_StereoCalibrationTest_CPP ------------------------------
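// Implements the covers as thin forwards to the C++ API: stereoCalibrate,
// stereoRectify, stereoRectifyUncalibrated, triangulatePoints, correctMatches.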
class CV_StereoCalibrationTest_CPP : public CV_StereoCalibrationTest
{
public:
CV_StereoCalibrationTest_CPP() {}
protected:
virtual double calibrateStereoCamera( const vector<vector<Point3f> >& objectPoints,
const vector<vector<Point2f> >& imagePoints1,
const vector<vector<Point2f> >& imagePoints2,
Mat& cameraMatrix1, Mat& distCoeffs1,
Mat& cameraMatrix2, Mat& distCoeffs2,
Size imageSize, Mat& R, Mat& T,
Mat& E, Mat& F, TermCriteria criteria, int flags );
virtual void rectify( const Mat& cameraMatrix1, const Mat& distCoeffs1,
const Mat& cameraMatrix2, const Mat& distCoeffs2,
Size imageSize, const Mat& R, const Mat& T,
Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q,
double alpha, Size newImageSize,
Rect* validPixROI1, Rect* validPixROI2, int flags );
virtual bool rectifyUncalibrated( const Mat& points1,
const Mat& points2, const Mat& F, Size imgSize,
Mat& H1, Mat& H2, double threshold=5 );
virtual void triangulate( const Mat& P1, const Mat& P2,
const Mat &points1, const Mat &points2,
Mat &points4D );
virtual void correct( const Mat& F,
const Mat &points1, const Mat &points2,
Mat &newPoints1, Mat &newPoints2 );
};
double CV_StereoCalibrationTest_CPP::calibrateStereoCamera( const vector<vector<Point3f> >& objectPoints,
const vector<vector<Point2f> >& imagePoints1,
const vector<vector<Point2f> >& imagePoints2,
Mat& cameraMatrix1, Mat& distCoeffs1,
Mat& cameraMatrix2, Mat& distCoeffs2,
Size imageSize, Mat& R, Mat& T,
Mat& E, Mat& F, TermCriteria criteria, int flags )
{
return stereoCalibrate( objectPoints, imagePoints1, imagePoints2,
cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
imageSize, R, T, E, F, flags, criteria );
}
void CV_StereoCalibrationTest_CPP::rectify( const Mat& cameraMatrix1, const Mat& distCoeffs1,
const Mat& cameraMatrix2, const Mat& distCoeffs2,
Size imageSize, const Mat& R, const Mat& T,
Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q,
double alpha, Size newImageSize,
Rect* validPixROI1, Rect* validPixROI2, int flags )
{
stereoRectify( cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
imageSize, R, T, R1, R2, P1, P2, Q, flags, alpha, newImageSize,validPixROI1, validPixROI2 );
}
bool CV_StereoCalibrationTest_CPP::rectifyUncalibrated( const Mat& points1,
const Mat& points2, const Mat& F, Size imgSize, Mat& H1, Mat& H2, double threshold )
{
return stereoRectifyUncalibrated( points1, points2, F, imgSize, H1, H2, threshold );
}
void CV_StereoCalibrationTest_CPP::triangulate( const Mat& P1, const Mat& P2,
const Mat &points1, const Mat &points2,
Mat &points4D )
{
triangulatePoints(P1, P2, points1, points2, points4D);
}
void CV_StereoCalibrationTest_CPP::correct( const Mat& F,
const Mat &points1, const Mat &points2,
Mat &newPoints1, Mat &newPoints2 )
{
correctMatches(F, points1, points2, newPoints1, newPoints2);
}
///////////////////////////////////////////////////////////////////////////////////////////////////
TEST(Calib3d_CalibrateCamera_C, regression) { CV_CameraCalibrationTest_C test; test.safe_run(); }
TEST(Calib3d_CalibrateCamera_CPP, regression) { CV_CameraCalibrationTest_CPP test; test.safe_run(); }
TEST(Calib3d_CalibrationMatrixValues_C, accuracy) { CV_CalibrationMatrixValuesTest_C test; test.safe_run(); }
TEST(Calib3d_CalibrationMatrixValues_CPP, accuracy) { CV_CalibrationMatrixValuesTest_CPP test; test.safe_run(); }
TEST(Calib3d_ProjectPoints_C, accuracy) { CV_ProjectPointsTest_C test; test.safe_run(); }
TEST(Calib3d_ProjectPoints_CPP, regression) { CV_ProjectPointsTest_CPP test; test.safe_run(); }
TEST(Calib3d_StereoCalibrate_C, regression) { CV_StereoCalibrationTest_C test; test.safe_run(); }
TEST(Calib3d_StereoCalibrate_CPP, regression) { CV_StereoCalibrationTest_CPP test; test.safe_run(); }
TEST(Calib3d_StereoCalibrateCorner, regression) { CV_StereoCalibrationCornerTest test; test.safe_run(); }
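// Extended stereoCalibrate() overload: rerunning with CALIB_USE_EXTRINSIC_GUESS
// and the previously estimated R, T must not increase the RMS error, and the
// per-view error output must hold one entry per camera for the single view.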
TEST(Calib3d_StereoCalibrate_CPP, extended)
{
cvtest::TS* ts = cvtest::TS::ptr();
String filepath = cv::format("%scv/stereo/case%d/", ts->get_data_path().c_str(), 1 );
Mat left = imread(filepath+"left01.png");
Mat right = imread(filepath+"right01.png");
if(left.empty() || right.empty())
{
ts->set_failed_test_info( cvtest::TS::FAIL_MISSING_TEST_DATA );
return;
}
vector<vector<Point2f> > imgpt1(1), imgpt2(1);
vector<vector<Point3f> > objpt(1);
Size patternSize(9, 6), imageSize(640, 480);
bool found1 = findChessboardCorners(left, patternSize, imgpt1[0]);
bool found2 = findChessboardCorners(right, patternSize, imgpt2[0]);
if(!found1 || !found2)
{
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_OUTPUT );
return;
}
for( int j = 0; j < patternSize.width*patternSize.height; j++ )
objpt[0].push_back(Point3f((float)(j%patternSize.width), (float)(j/patternSize.width), 0.f));
Mat K1, K2, c1, c2, R, T, E, F, err;
int flags = 0;
double res0 = stereoCalibrate( objpt, imgpt1, imgpt2,
K1, c1, K2, c2,
imageSize, R, T, E, F, err, flags);
flags = CALIB_USE_EXTRINSIC_GUESS;
double res1 = stereoCalibrate( objpt, imgpt1, imgpt2,
K1, c1, K2, c2,
imageSize, R, T, E, F, err, flags);
EXPECT_LE(res1, res0);
EXPECT_TRUE(err.total() == 2);
}
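// Regression test for issue 10791: stereoRectify() with 14-element distortion
// vectors (rational-model coefficients, remaining entries zero) must still
// report reasonably large valid ROIs for both cameras.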
TEST(Calib3d_StereoCalibrate, regression_10791)
{
const Matx33d M1(
853.1387981631528, 0, 704.154907802121,
0, 853.6445089162528, 520.3600712930319,
0, 0, 1
);
const Matx33d M2(
848.6090216909176, 0, 701.6162856852185,
0, 849.7040162357157, 509.1864036137,
0, 0, 1
);
const Matx<double, 14, 1> D1(-6.463598629567206, 79.00104930508179, -0.0001006144444464403, -0.0005437499822299972,
12.56900616588467, -6.056719942752855, 76.3842481414836, 45.57460250612659,
0, 0, 0, 0, 0, 0);
const Matx<double, 14, 1> D2(0.6123436439798265, -0.4671756923224087, -0.0001261947899033442, -0.000597334584036978,
-0.05660119809538371, 1.037075740629769, -0.3076042835831711, -0.2502169324283623,
0, 0, 0, 0, 0, 0);
const Matx33d R(
0.9999926627018476, -0.0001095586963765905, 0.003829169539302921,
0.0001021735876758584, 0.9999981346680941, 0.0019287874145156,
-0.003829373712065528, -0.001928382022437616, 0.9999908085776333
);
const Matx31d T(-58.9161771697128, -0.01581306249996402, -0.8492960216760961);
const Size imageSize(1280, 960);
Mat R1, R2, P1, P2, Q;
Rect roi1, roi2;
stereoRectify(M1, D1, M2, D2, imageSize, R, T,
R1, R2, P1, P2, Q,
CALIB_ZERO_DISPARITY, 1, imageSize, &roi1, &roi2);
EXPECT_GE(roi1.area(), 400*300) << roi1;
EXPECT_GE(roi2.area(), 400*300) << roi2;
}
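// triangulatePoints() accuracy on two hand-checked correspondences taken from
// the bug reports referenced below; the homogeneous results are converted back
// to Euclidean coordinates and compared against the expected 3D points.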
TEST(Calib3d_Triangulate, accuracy)
{
// the testcase from http://code.opencv.org/issues/4334
{
double P1data[] = { 250, 0, 200, 0, 0, 250, 150, 0, 0, 0, 1, 0 };
double P2data[] = { 250, 0, 200, -250, 0, 250, 150, 0, 0, 0, 1, 0 };
Mat P1(3, 4, CV_64F, P1data), P2(3, 4, CV_64F, P2data);
float x1data[] = { 200.f, 0.f };
float x2data[] = { 170.f, 1.f };
float Xdata[] = { 0.f, -5.f, 25/3.f };
Mat x1(2, 1, CV_32F, x1data);
Mat x2(2, 1, CV_32F, x2data);
Mat res0(1, 3, CV_32F, Xdata);
Mat res_, res;
triangulatePoints(P1, P2, x1, x2, res_);
cv::transpose(res_, res_); // TODO cvtest (transpose doesn't support inplace)
convertPointsFromHomogeneous(res_, res);
res = res.reshape(1, 1);
cout << "[1]:" << endl;
cout << "\tres0: " << res0 << endl;
cout << "\tres: " << res << endl;
ASSERT_LE(cvtest::norm(res, res0, NORM_INF), 1e-1);
}
// another testcase http://code.opencv.org/issues/3461
{
Matx33d K1(6137.147949, 0.000000, 644.974609,
0.000000, 6137.147949, 573.442749,
0.000000, 0.000000, 1.000000);
Matx33d K2(6137.147949, 0.000000, 644.674438,
0.000000, 6137.147949, 573.079834,
0.000000, 0.000000, 1.000000);
Matx34d RT1(1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0);
Matx34d RT2(0.998297, 0.0064108, -0.0579766, 143.614334,
-0.0065818, 0.999975, -0.00275888, -5.160085,
0.0579574, 0.00313577, 0.998314, 96.066109);
Matx34d P1 = K1*RT1;
Matx34d P2 = K2*RT2;
float x1data[] = { 438.f, 19.f };
float x2data[] = { 452.363600f, 16.452225f };
float Xdata[] = { -81.049530f, -215.702804f, 2401.645449f };
Mat x1(2, 1, CV_32F, x1data);
Mat x2(2, 1, CV_32F, x2data);
Mat res0(1, 3, CV_32F, Xdata);
Mat res_, res;
triangulatePoints(P1, P2, x1, x2, res_);
cv::transpose(res_, res_); // TODO cvtest (transpose doesn't support inplace)
convertPointsFromHomogeneous(res_, res);
res = res.reshape(1, 1);
cout << "[2]:" << endl;
cout << "\tres0: " << res0 << endl;
cout << "\tres: " << res << endl;
ASSERT_LE(cvtest::norm(res, res0, NORM_INF), 2);
}
}
}} // namespace