stereo-vision
StereoCamera Class Reference

The base class defining a stereo camera. More...

#include <stereoCamera.h>

Public Member Functions

 StereoCamera (bool rectify=true)
 Default Constructor. More...
 
 StereoCamera (yarp::os::ResourceFinder &rf, bool rectify=true)
 Constructor for initialization from file. More...
 
 StereoCamera (Camera First, Camera Second, bool rectify=true)
 Constructor for initialization using two calibrated cameras. More...
 
void initELAS (yarp::os::ResourceFinder &rf)
 Initialization of ELAS parameters. More...
 
void stereoCalibration (vector< string > imageList, int boardWidth, int boardHeight, float sqsize=1.0)
 It performs the stereo camera calibration. More...
 
void saveCalibration (string extrinsicFilePath, string intrinsicFilePath)
 It saves the calibration. More...
 
void setImages (const Mat &firstImg, const Mat &secondImg)
 It stores in memory a couple of images. More...
 
cv::Mat findMatch (bool visualize=false, double displacement=20.0, double radius=200.0)
 It finds matches between two images. More...
 
void computeDisparity (bool best=true, int uniquenessRatio=15, int speckleWindowSize=50, int speckleRange=16, int numberOfDisparities=64, int SADWindowSize=7, int minDisparity=0, int preFilterCap=63, int disp12MaxDiff=0)
 It computes the Disparity Map (see stereoDisparity). More...
 
void undistortImages ()
 It undistorts the images. More...
 
void horn (Mat &K1, Mat &K2, vector< Point2f > &Points1, vector< Point2f > &Points2, Mat &Rot, Mat &Tras)
 It performs Horn's relative-orientation algorithm, i.e. More...
 
void hornRelativeOrientations ()
 It performs Horn's relative-orientation algorithm; all the parameters are assumed to be initialized in the StereoCamera object. More...
 
Point3f triangulation (Point2f &point1, Point2f &point2)
 It performs the triangulation using the internally stored 3x4 camera matrices P1 and P2. More...
 
Point3f triangulation (Point2f &point1, Point2f &point2, Mat Camera1, Mat Camera2)
 It performs the triangulation (HZ Chap 12.2 homogeneous solution). More...
 
Point3f triangulationLS (Point2f &point1, Point2f &point2, Mat Camera1, Mat Camera2)
 It performs the least-squares triangulation (HZ Chap 12.2 inhomogeneous solution). More...
 
Point3f metricTriangulation (Point2f &point1, double thMeters=10)
 It performs the metric triangulation given the pixel coordinates on the first image. More...
 
Point3f metricTriangulation (Point2f &point1, Mat &H, double thMeters=10)
 It performs the metric triangulation given the pixel coordinates on the first image. More...
 
Point3f triangulateKnownDisparity (float u, float v, float d, Mat &H)
 It performs the metric triangulation given the pixel coordinates on the first image and the disparity between the two RECTIFIED images. More...
 
void estimateEssential ()
 It estimates the essential matrix (3x3) E between two views. More...
 
bool essentialDecomposition ()
 It decomposes the essential matrix in Rotation and Translation between the two views. More...
 
void chierality (Mat &R1, Mat &R2, Mat &t1, Mat &t2, Mat &R, Mat &t, vector< Point2f > points1, vector< Point2f > points2)
 It performs the chierality test: given a couple of rotation matrices, translation vectors, and matches, it finds the correct rotation and translation s.t. More...
 
const Mat & getImLeft () const
 It returns the left (first) image. More...
 
const Mat & getImRight () const
 It returns the right (second) image. More...
 
const Mat & getImLeftUnd () const
 It returns the left undistorted image. More...
 
const Mat & getImRightUnd () const
 It returns the right undistorted image. More...
 
const Mat & getDisparity () const
 It returns the disparity image. More...
 
const Mat & getDisparity16 () const
 It returns the disparity image. More...
 
const Mat & getQ () const
 It returns the 4x4 disparity-to-depth mapping matrix. More...
 
const Mat & getKleft () const
 It returns the 3x3 left camera matrix. More...
 
const Mat & getKright () const
 It returns the 3x3 right camera matrix. More...
 
const Mat & getFundamental () const
 It returns the 3x3 fundamental matrix. More...
 
const vector< Point2f > & getMatchLeft () const
 It returns the pixel coordinates of the matches in the left image. More...
 
const vector< Point2f > & getMatchRight () const
 It returns the pixel coordinates of the matches in the right image. More...
 
const Mat & getTranslation () const
 It returns the translation vector between the two cameras. More...
 
const Mat & getRotation () const
 It returns the rotation matrix between the two cameras. More...
 
const Mat & getMapperL () const
 It returns the mapping between the original left camera and the rectified left camera. More...
 
const Mat & getMapperR () const
 It returns the mapping between the original right camera and the rectified right camera. More...
 
const Mat & getRLrect () const
 It returns the rotation matrix between the original left camera and the rectified left camera. More...
 
const Mat & getRRrect () const
 It returns the rotation matrix between the original right camera and the rectified right camera. More...
 
void setRotation (Mat &Rot, int mode=0)
 It sets the rotation matrix (if known) between the first and the second camera. More...
 
void setTranslation (Mat &Tras, int mul=0)
 It sets the translation vector (if known) between the first and the second camera. More...
 
void setIntrinsics (Mat &K1, Mat &K2, Mat &Dist1, Mat &Dist2)
 It sets the intrinsic parameters. More...
 
void rectifyImages ()
 The method rectifies the two images: it transforms each image plane such that pairs of conjugate epipolar lines become collinear and parallel to one of the image axes (i.e. More...
 
const Mat & getLRectified () const
 The method returns the first rectified image. More...
 
const Mat & getRRectified () const
 The method returns the second rectified image. More...
 
vector< Point2f > projectPoints3D (string camera, vector< Point3f > &points3D, Mat &H)
 The method returns the 2D projection of a set of 3D points in the Cartesian space onto the specified camera. More...
 
Mat computeWorldImage (Mat &H)
 The method returns a 3-channel float image with the world coordinates w.r.t. the H reference system. More...
 
const Mat & getDistCoeffRight () const
 It returns the 5x1 right distortion coefficients. More...
 
const Mat & getDistCoeffLeft () const
 It returns the 5x1 left distortion coefficients. More...
 
Point2f getDistortedPixel (int u, int v, int cam=1)
 Given the u,v pixel coordinates in the undistorted image, the method returns the original position of the pixel in the distorted frame. More...
 
Mat drawMatches ()
 The method returns a 3-channel 8-bit image with the image matches. More...
 
void setMatches (std::vector< cv::Point2f > &pointsL, std::vector< cv::Point2f > &pointsR)
 The function initializes the matches of the current image pair. More...
 
void setExpectedPosition (Mat &Rot, Mat &Tran)
 The function sets the expected Rotation and Translation parameters for the current image pair. More...
 
Mat FfromP (Mat &P1, Mat &P2)
 The function computes the fundamental matrix starting from known camera matrices. More...
 
Point2f fromRectifiedToOriginal (int u, int v, int camera)
 Given the u,v pixel coordinates in the rectified image, the method returns the position of the pixel in the non-rectified frame. More...
 
cv::Mat remapDisparity (cv::Mat disp)
 Remaps the disparity map on the basis of the mapping previously computed. More...
 
void updateMappings ()
 It updates the internal mappings. More...
 

Data Fields

Mat filtered_disp
 

Detailed Description

The base class defining a stereo camera.

It allows one to calibrate the cameras, undistort a pair of images, find matches between two images, triangulate points, and estimate the motion between two images. The basic assumption is that the two images come from a stereo camera; however, the class also works with two arbitrary images.

Definition at line 91 of file stereoCamera.h.
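
A minimal usage sketch of the typical pipeline (the file names and the configuration file are illustrative; the calibration file is assumed to be in the format produced by the stereoCalib module):

    #include <opencv2/opencv.hpp>
    #include <yarp/os/ResourceFinder.h>
    #include <stereoCamera.h>

    int main(int argc, char *argv[])
    {
        // Load the calibration produced by the stereoCalib module (file name is illustrative).
        yarp::os::ResourceFinder rf;
        rf.setDefaultConfigFile("icubEyes.ini");
        rf.configure(argc, argv);

        StereoCamera stereo(rf, true /*rectify*/);

        // Feed a pair of images acquired from the left and right cameras.
        cv::Mat left = cv::imread("left.png");
        cv::Mat right = cv::imread("right.png");
        stereo.setImages(left, right);

        // Compute the disparity map and save the 8-bit result.
        stereo.computeDisparity();
        cv::imwrite("disparity.png", stereo.getDisparity());

        return 0;
    }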

Constructor & Destructor Documentation

◆ StereoCamera() [1/3]

StereoCamera::StereoCamera ( bool  rectify = true)

Default Constructor.

You should initialize all the intrinsic and extrinsic parameters using the provided methods.

Definition at line 116 of file stereoCamera.cpp.

116  {
117  this->rectify=rectify;
118  this->epipolarTh=0.01;
119  this->cameraChanged = true;
120 
121 #if !defined(USING_GPU) && !defined(OPENCV_GREATER_2)
122  cv::initModule_nonfree();
123 #endif
124 
125 }
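
A minimal sketch of the manual initialization expected after the default constructor (the intrinsic and extrinsic values below are placeholders, not real calibration data):

    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    void setupFromKnownCalibration()
    {
        StereoCamera stereo;   // rectify defaults to true

        // Placeholder intrinsics (fx, 0, cx; 0, fy, cy; 0, 0, 1), illustrative values only.
        cv::Mat KL = (cv::Mat_<double>(3,3) << 410, 0, 160,  0, 410, 120,  0, 0, 1);
        cv::Mat KR = KL.clone();
        cv::Mat DL = cv::Mat::zeros(5,1,CV_64FC1);   // zero distortion assumed here
        cv::Mat DR = cv::Mat::zeros(5,1,CV_64FC1);
        stereo.setIntrinsics(KL, KR, DL, DR);

        // Placeholder extrinsics: identity rotation and a 7 cm baseline along x.
        cv::Mat R = cv::Mat::eye(3,3,CV_64FC1);
        cv::Mat T = (cv::Mat_<double>(3,1) << -0.07, 0.0, 0.0);
        stereo.setRotation(R);
        stereo.setTranslation(T);
    }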

◆ StereoCamera() [2/3]

StereoCamera::StereoCamera ( yarp::os::ResourceFinder &  rf,
bool  rectify = true 
)

Constructor for initialization from file.

Parameters
rf: the config file generated by the stereoCalib module.

Definition at line 128 of file stereoCamera.cpp.

128  {
129  Mat KL, KR, DistL, DistR, R, T;
130  loadStereoParameters(rf,KL,KR,DistL,DistR,R,T);
131  this->setIntrinsics(KL,KR,DistL,DistR);
132  this->setRotation(R,0);
133  this->setTranslation(T,0);
134 
135  this->cameraChanged=true;
136  this->epipolarTh=0.01;
137  this->rectify=rectify;
138  buildUndistortRemap();
139 
140 #if !defined(USING_GPU) && !defined(OPENCV_GREATER_2)
141  cv::initModule_nonfree();
142 #endif
143 
144 }

References setIntrinsics(), setRotation(), and setTranslation().

◆ StereoCamera() [3/3]

StereoCamera::StereoCamera ( Camera  First,
Camera  Second,
bool  rectify = true 
)

Constructor for initialization using two calibrated cameras.

Note
Only intrinsic parameters are initialized.
Parameters
First: the first camera (Left eye is assumed, but you can use any arbitrary camera). The 3D point coordinates are expressed in this reference system.
Second: the second camera (Right eye is assumed).

Definition at line 147 of file stereoCamera.cpp.

147  {
148  this->Kleft=Left.getCameraMatrix();
149  this->DistL=Left.getDistVector();
150 
151  this->Kright=Right.getCameraMatrix();
152  this->DistR=Right.getDistVector();
153  this->cameraChanged=true;
154  this->rectify=rectify;
155  this->epipolarTh=0.01;
156  buildUndistortRemap();
157 
158 #if !defined(USING_GPU) && !defined(OPENCV_GREATER_2)
159  cv::initModule_nonfree();
160 #endif
161 
162 }

References Camera::getCameraMatrix(), and Camera::getDistVector().

Member Function Documentation

◆ chierality()

void StereoCamera::chierality ( Mat &  R1,
Mat &  R2,
Mat &  t1,
Mat &  t2,
Mat &  R,
Mat &  t,
vector< Point2f >  points1,
vector< Point2f >  points2 
)

It performs the chierality test: given a couple of rotation matrices, translation vectors, and matches, it finds the correct rotation and translation s.t.

the triangulated points have their depth coordinates greater than 0. The method is used by essentialDecomposition; indeed, an essential matrix generates two rotations and two translations. The chierality (cheirality) test is needed in order to discard the wrong rototranslations.

Parameters
R1: first rotation 3x3 matrix
R2: second rotation 3x3 matrix
t1: first translation 3x1 matrix
t2: second translation 3x1 matrix
R: output rotation matrix
t: output translation matrix
points1: correspondences in the first image
points2: correspondences in the second image

Definition at line 1171 of file stereoCamera.cpp.

1171  {
1172 
1173  Mat A= Mat::eye(3,4,CV_64FC1);
1174  Mat P1 = this->Kleft*Mat::eye(3, 4, CV_64F);
1175 
1176  for(int i = 0; i < R1.rows; i++)
1177  {
1178  double* Mi = A.ptr<double>(i);
1179  double* MRi = R1.ptr<double>(i);
1180  for(int j = 0; j < R1.cols; j++)
1181  Mi[j]=MRi[j];
1182  }
1183 
1184  for(int i = 0; i < t1.rows; i++)
1185  {
1186  double* Mi = A.ptr<double>(i);
1187  double* MRi = t1.ptr<double>(i);
1188  Mi[3]=MRi[0];
1189  }
1190 
1191  Mat P2=this->Kright*A;
1192  A= Mat::eye(3,4,CV_64FC1);
1193 
1194  for(int i = 0; i < R2.rows; i++)
1195  {
1196  double* Mi = A.ptr<double>(i);
1197  double* MRi = R2.ptr<double>(i);
1198  for(int j = 0; j < R2.cols; j++)
1199  Mi[j]=MRi[j];
1200  }
1201  for(int i = 0; i < t2.rows; i++)
1202  {
1203  double* Mi = A.ptr<double>(i);
1204  double* MRi = t2.ptr<double>(i);
1205  Mi[3]=MRi[0];
1206  }
1207  Mat P3=this->Kright*A;
1208  A= Mat::eye(3,4,CV_64FC1);
1209 
1210  for(int i = 0; i < R1.rows; i++)
1211  {
1212  double* Mi = A.ptr<double>(i);
1213  double* MRi = R1.ptr<double>(i);
1214  for(int j = 0; j < R1.cols; j++)
1215  Mi[j]=MRi[j];
1216  }
1217  for(int i = 0; i < t1.rows; i++)
1218  {
1219  double* Mi = A.ptr<double>(i);
1220  double* MRi = t2.ptr<double>(i);
1221  Mi[3]=MRi[0];
1222  }
1223  Mat P4=this->Kright*A;
1224  A= Mat::eye(3,4,CV_64FC1);
1225 
1226 
1227  for(int i = 0; i < R2.rows; i++)
1228  {
1229  double* Mi = A.ptr<double>(i);
1230  double* MRi = R2.ptr<double>(i);
1231  for(int j = 0; j < R2.cols; j++)
1232  Mi[j]=MRi[j];
1233  }
1234  for(int i = 0; i < t2.rows; i++)
1235  {
1236  double* Mi = A.ptr<double>(i);
1237  double* MRi = t1.ptr<double>(i);
1238  Mi[3]=MRi[0];
1239  }
1240  Mat P5=this->Kright*A;
1241 
1242  int err1=0; //R1 t1
1243  int err2=0; //R2 t2
1244  int err3=0; //R1 t2
1245  int err4=0; //R2 t1
1246  Mat point(4,1,CV_64FC1);
1247 
1248  for(int i=0; i<(int) InliersL.size(); i++)
1249  {
1250  Point3f point3D=triangulation(points1[i],points2[i],P1,P2);
1251  Mat H1=buildRotTras(R1,t1);
1252  point.at<double>(0,0)=point3D.x;
1253  point.at<double>(1,0)=point3D.y;
1254  point.at<double>(2,0)=point3D.z;
1255  point.at<double>(0,0)=1.0;
1256  Mat rotatedPoint=H1*point;
1257  //fprintf(stdout, "Camera P2 Point3D: %f %f %f Rotated Point: %f %f %f \n", point3D.x,point3D.y,point3D.z, rotatedPoint.at<double>(0,0),rotatedPoint.at<double>(1,0),rotatedPoint.at<double>(2,0));
1258 
1259  if(point3D.z<0 || rotatedPoint.at<double>(2,0)<0) {
1260  err1++;
1261  }
1262  point3D=triangulation(points1[i],points2[i],P1,P3);
1263  Mat H2=buildRotTras(R2,t2);
1264  point.at<double>(0,0)=point3D.x;
1265  point.at<double>(1,0)=point3D.y;
1266  point.at<double>(2,0)=point3D.z;
1267  point.at<double>(0,0)=1.0;
1268  rotatedPoint=H2*point;
1269  //fprintf(stdout, "Camera P3 Point3D: %f %f %f Rotated Point: %f %f %f \n", point3D.x,point3D.y,point3D.z, rotatedPoint.at<double>(0,0),rotatedPoint.at<double>(1,0),rotatedPoint.at<double>(2,0));
1270 
1271  if(point3D.z<0 || rotatedPoint.at<double>(2,0)<0) {
1272  err2++;
1273  }
1274 
1275  point3D=triangulation(points1[i],points2[i],P1,P4);
1276  Mat H3=buildRotTras(R1,t2);
1277  point.at<double>(0,0)=point3D.x;
1278  point.at<double>(1,0)=point3D.y;
1279  point.at<double>(2,0)=point3D.z;
1280  point.at<double>(0,0)=1.0;
1281  rotatedPoint=H3*point;
1282  //fprintf(stdout, "Camera P4 Point3D: %f %f %f Rotated Point: %f %f %f \n", point3D.x,point3D.y,point3D.z, rotatedPoint.at<double>(0,0),rotatedPoint.at<double>(1,0),rotatedPoint.at<double>(2,0));
1283 
1284  if(point3D.z<0 || rotatedPoint.at<double>(2,0)<0) {
1285  err3++;
1286  }
1287 
1288  point3D=triangulation(points1[i],points2[i],P1,P5);
1289  Mat H4=buildRotTras(R2,t1);
1290  point.at<double>(0,0)=point3D.x;
1291  point.at<double>(1,0)=point3D.y;
1292  point.at<double>(2,0)=point3D.z;
1293  point.at<double>(0,0)=1.0;
1294  rotatedPoint=H4*point;
1295  //fprintf(stdout, "Camera P5 Point3D: %f %f %f Rotated Point: %f %f %f \n", point3D.x,point3D.y,point3D.z, rotatedPoint.at<double>(0,0),rotatedPoint.at<double>(1,0),rotatedPoint.at<double>(2,0));
1296 
1297  if(point3D.z<0 || rotatedPoint.at<double>(2,0)<0) {
1298  err4++;
1299  }
1300 
1301  }
1302 
1303  /*printMatrix(R1);
1304  printMatrix(t1);
1305  printMatrix(R2);
1306  printMatrix(t2);*/
1307  //fprintf(stdout, "Inliers: %d, %d, \n",points1.size(),points2.size());
1308  //fprintf(stdout, "errors: %d, %d, %d, %d, \n",err1,err2,err3,err4);
1309 
1310  double minErr=10000;
1311  double secondErr=minErr;
1312 
1313  int idx=0;
1314  if(err1<minErr && t1.ptr<double>(0)[0]<0)
1315  {
1316  idx=1;
1317  secondErr=minErr;
1318  minErr=err1;
1319  }
1320 
1321  if(err2<minErr && t2.ptr<double>(0)[0]<0)
1322  {
1323  idx=2;
1324  secondErr=minErr;
1325  minErr=err2;
1326  }
1327  if(err3<minErr && t2.ptr<double>(0)[0]<0)
1328  {
1329  idx=3;
1330  secondErr=minErr;
1331  minErr=err3;
1332  }
1333  if(err4<minErr && t1.ptr<double>(0)[0]<0)
1334  {
1335  idx=4;
1336  secondErr=minErr;
1337  minErr=err4;
1338  }
1339 
1340  /*if(secondErr==minErr)
1341  {
1342  R=this->R;
1343  t=this->T;
1344  return;
1345  }*/
1346  if(idx==1) {
1347  R=R1;
1348  t=t1;
1349  return;
1350  }
1351  if(idx==2) {
1352  R=R2;
1353  t=t2;
1354  return;
1355  }
1356  if(idx==3) {
1357  R=R1;
1358  t=t2;
1359  return;
1360  }
1361  if(idx==4) {
1362  R=R2;
1363  t=t1;
1364  return;
1365  }
1366 
1367 }

References triangulation().

Referenced by essentialDecomposition().

◆ computeDisparity()

void StereoCamera::computeDisparity ( bool  best = true,
int  uniquenessRatio = 15,
int  speckleWindowSize = 50,
int  speckleRange = 16,
int  numberOfDisparities = 64,
int  SADWindowSize = 7,
int  minDisparity = 0,
int  preFilterCap = 63,
int  disp12MaxDiff = 0 
)

It computes the Disparity Map (see stereoDisparity).

Parameters
best: set to true for better accuracy, or to false to save computation.
uniquenessRatio: the margin in percent by which the best (minimum) computed cost function value should win over the second best value to consider the found match correct. Normally, a value in the 5-15 range is good enough.
speckleWindowSize: the maximum size of smooth disparity regions to consider them noise speckles and invalidate them. Set it to 0 to disable speckle filtering; otherwise, set it somewhere in the 50-200 range.
speckleRange: the maximum disparity variation within each connected component. If you do speckle filtering, set it to some positive value, a multiple of 16. Normally, 16 or 32 is good enough.
Note
Run the calibration or set all the parameters before using this method.

Definition at line 581 of file stereoCamera.cpp.

584 {
585 
586  if (this->Kleft.empty() || this->DistL.empty() || this->Kright.empty() || this->DistR.empty())
587  {
588  cout <<" Cameras are not calibrated! Run the Calibration first!" << endl;
589  return;
590  }
591 
592  if (this->imleft.empty() || this->imright.empty())
593  {
594  cout << "Images are not set! set the images first!" << endl;
595  return;
596  }
597 
598  Size img_size=this->imleft.size();
599 
600  if (cameraChanged)
601  {
602  lock_guard<mutex> lg(mtx);
603  stereoRectify(this->Kleft, this->DistL, this->Kright, this->DistR, img_size,
604  this->R, this->T, this->RLrect, this->RRrect, this->PLrect,
605  this->PRrect, this->Q, -1);
606 
607  if (!rectify)
608  {
609  this->RLrect=Mat::eye(3,3,CV_32FC1);
610  this->RRrect=Mat::eye(3,3,CV_32FC1);
611  this->PLrect=this->Kleft;
612  this->PRrect=this->Kright;
613  }
614  }
615 
616  if (cameraChanged)
617  {
618  initUndistortRectifyMap(this->Kleft, this->DistL, this->RLrect, this->PLrect,
619  img_size, CV_32FC1, this->map11, this->map12);
620  initUndistortRectifyMap(this->Kright, this->DistR, this->RRrect, this->PRrect,
621  img_size, CV_32FC1, this->map21, this->map22);
622  }
623 
624  Mat img1r, img2r;
625  remap(this->imleft, img1r, this->map11, this->map12, cv::INTER_LINEAR);
626  remap(this->imright, img2r, this->map21,this->map22, cv::INTER_LINEAR);
627 
628  imgLeftRect = img1r;
629  imgRightRect = img2r;
630 
631  Mat disp,disp8,map,dispTemp;
632 
633  bool success;
634 
635  if (use_elas)
636  {
637  success = elaswrap->compute_disparity(img1r, img2r, disp, numberOfDisparities);
638  if (success)
639  {
640  map = disp * (255.0 / numberOfDisparities);
641  //threshold(map, map, 0, 255.0, THRESH_TOZERO);
642  }
643  } else
644  {
645  int cn=this->imleft.channels();
646  #ifdef OPENCV_GREATER_2
647  Ptr<StereoSGBM> sgbm=cv::StereoSGBM::create(minDisparity,numberOfDisparities,SADWindowSize,
648  8*cn*SADWindowSize*SADWindowSize,
649  32*cn*SADWindowSize*SADWindowSize,
650  disp12MaxDiff,preFilterCap,uniquenessRatio,
651  speckleWindowSize,speckleRange,
652  best?StereoSGBM::MODE_HH:StereoSGBM::MODE_SGBM);
653  sgbm->compute(img1r, img2r, disp);
654  #else
655  StereoSGBM sgbm;
656  sgbm.preFilterCap = preFilterCap; //63
657  sgbm.SADWindowSize = SADWindowSize;
658  sgbm.P1 = 8*cn*SADWindowSize*SADWindowSize;
659  sgbm.P2 = 32*cn*SADWindowSize*SADWindowSize;
660  sgbm.minDisparity = minDisparity; //-15
661  sgbm.numberOfDisparities = numberOfDisparities;
662  sgbm.uniquenessRatio = uniquenessRatio; //22
663  sgbm.speckleWindowSize = speckleWindowSize; //100
664  sgbm.speckleRange = speckleRange; //32
665  sgbm.disp12MaxDiff = disp12MaxDiff;
666  sgbm.fullDP = best; // alg == STEREO_HH
667 
668  sgbm(img1r, img2r, disp);
669  #endif
670 
671  disp.convertTo(map, CV_32FC1, 1.0,0.0);
672  map.convertTo(map,CV_32FC1,255/(numberOfDisparities*16.));
673  //normalize(map,map, 0, 255, cv::NORM_MINMAX, CV_8UC1);
674 
675  success = true;
676  }
677 
678  if (success)
679  {
680  if (cameraChanged)
681  {
682  lock_guard<mutex> lg(mtx);
683  Mat inverseMapL(map.rows*map.cols,1,CV_32FC2);
684  Mat inverseMapR(map.rows*map.cols,1,CV_32FC2);
685 
686  for (int y=0; y<map.rows; y++)
687  {
688  for (int x=0; x<map.cols; x++)
689  {
690  inverseMapL.ptr<float>(y*map.cols+x)[0]=(float)x;
691  inverseMapL.ptr<float>(y*map.cols+x)[1]=(float)y;
692  inverseMapR.ptr<float>(y*map.cols+x)[0]=(float)x;
693  inverseMapR.ptr<float>(y*map.cols+x)[1]=(float)y;
694  }
695  }
696 
697  undistortPoints(inverseMapL,inverseMapL,this->Kleft,this->DistL,this->RLrect,this->PLrect);
698  undistortPoints(inverseMapR,inverseMapR,this->Kright,this->DistR,this->RRrect,this->PRrect);
699 
700  Mat mapperL=inverseMapL.reshape(2,map.rows);
701  Mat mapperR=inverseMapR.reshape(2,map.rows);
702  this->MapperL=mapperL;
703  this->MapperR=mapperR;
704 
705  cameraChanged = false;
706  }
707 
708  Mat x;
709  remap(map,dispTemp,this->MapperL,x,cv::INTER_LINEAR);
710  dispTemp.convertTo(disp8,CV_8U);
711 
712  if (use_elas)
713  disp.convertTo(disp, CV_16SC1, 16.0);
714  }
715 
716  lock_guard<mutex> lg(mtx);
717  this->Disparity = disp8;
718  this->Disparity16 = disp;
719 }
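
A sketch of a call that spells out the SGBM-related parameters (the values are simply the documented defaults, shown with their names attached; the helper function is illustrative):

    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    // Assumes the camera parameters and the current image pair have already been set.
    void runDisparity(StereoCamera &stereo)
    {
        stereo.computeDisparity(true,  // best: more accurate (full DP) mode
                                15,    // uniquenessRatio
                                50,    // speckleWindowSize
                                16,    // speckleRange
                                64,    // numberOfDisparities
                                7,     // SADWindowSize
                                0,     // minDisparity
                                63,    // preFilterCap
                                0);    // disp12MaxDiff

        const cv::Mat &disp8  = stereo.getDisparity();    // 8-bit map, handy for display
        const cv::Mat &disp16 = stereo.getDisparity16();  // 16-bit fixed-point disparities
        cv::imwrite("disp8.png", disp8);
        (void)disp16;
    }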

◆ computeWorldImage()

Mat StereoCamera::computeWorldImage ( Mat &  H)

The method returns a 3-channel float image with the world coordinates w.r.t. the H reference system.

Parameters
H: the transformation from the camera reference system to the H reference system
Returns
The 3-channel float image with the world coordinates w.r.t. the H reference system.

Definition at line 2231 of file stereoCamera.cpp.

2232 {
2233  Mat worldImg(Disparity16.rows,Disparity16.cols,CV_32FC3);
2234 
2235  if(H.empty())
2236  H=H.eye(4,4,CV_64FC1);
2237 
2238  if(Disparity16.empty() || MapperL.empty() || Q.empty())
2239  {
2240  cout <<" Run computeDisparity() method first" << endl;
2241  return worldImg;
2242  }
2243 
2244  Mat dispTemp;
2245  Mat x;
2246  remap(this->Disparity16,dispTemp,this->MapperL,x,cv::INTER_LINEAR);
2247  reprojectImageTo3D(dispTemp, worldImg,this->Q,true);
2248 
2249  for(int i=0; i<worldImg.rows; i++)
2250  {
2251  for(int j=0; j<worldImg.cols; j++)
2252  {
2253  Mat RLrectTmp=this->getRLrect().t();
2254  Mat Tfake = Mat::zeros(0,3,CV_64F);
2255  Mat P(4,1,CV_64FC1);
2256  if((worldImg.data + worldImg.step * i)[j * worldImg.channels() + 2]>100)
2257  {
2258  P.at<double>(0,0)=0.0;
2259  P.at<double>(1,0)=0.0;
2260  P.at<double>(2,0)=0.0;
2261  P.at<double>(3,0)=1.0;
2262  }
2263  else
2264  {
2265  P.at<double>(0,0)=(worldImg.data + worldImg.step * i)[j * worldImg.channels() + 0];
2266  P.at<double>(1,0)=(worldImg.data + worldImg.step * i)[j * worldImg.channels() + 1];
2267  P.at<double>(2,0)=(worldImg.data + worldImg.step * i)[j * worldImg.channels() + 2];
2268  P.at<double>(3,0)=1;
2269 
2270  Mat Hrect=buildRotTras(RLrectTmp,Tfake);
2271  P=H*Hrect*P;
2272  }
2273  (worldImg.data + worldImg.step * i)[j * worldImg.channels() + 0]=(float) ((float) P.at<double>(0,0)/P.at<double>(3,0));
2274  (worldImg.data + worldImg.step * i)[j * worldImg.channels() + 1]=(float) ((float) P.at<double>(1,0)/P.at<double>(3,0));
2275  (worldImg.data + worldImg.step * i)[j * worldImg.channels() + 2]=(float) ((float) P.at<double>(2,0)/P.at<double>(3,0));
2276  }
2277  }
2278 
2279  return worldImg;
2280 }

References getRLrect().
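
A sketch that reads the 3D coordinates of a single pixel, with H set to the identity so that the coordinates stay in the left camera frame (the pixel location and helper function are illustrative):

    #include <cstdio>
    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    // Assumes computeDisparity() has already been run on the current pair.
    void worldCoordinates(StereoCamera &stereo)
    {
        cv::Mat H = cv::Mat::eye(4, 4, CV_64FC1);     // identity: stay in the camera frame
        cv::Mat world = stereo.computeWorldImage(H);  // CV_32FC3 image, one (X,Y,Z) per pixel

        if (world.rows > 120 && world.cols > 160)
        {
            cv::Vec3f p = world.at<cv::Vec3f>(120, 160);  // 3D point behind pixel (u=160, v=120)
            std::printf("X=%f Y=%f Z=%f\n", p[0], p[1], p[2]);
        }
    }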

◆ drawMatches()

Mat StereoCamera::drawMatches ( )

The method returns a 3-channel 8-bit image with the image matches.

Returns
The 3-channel 8-bit image with the image matches. Call findMatch() to retrieve the keypoints first.

Definition at line 1668 of file stereoCamera.cpp.

1669 {
1670  if (this->imleftund.empty() || this->imrightund.empty())
1671  {
1672  imleftund=imleft;
1673  imrightund=imright;
1674  }
1675 
1676  Mat matchImg;
1677  vector<KeyPoint> keypoints1(InliersL.size());
1678  vector<KeyPoint> keypoints2(InliersL.size());
1679  vector<DMatch> filteredMatches(InliersL.size());
1680 
1681  for (int i=0; i<InliersL.size(); i++)
1682  {
1683  filteredMatches[i].queryIdx=i;
1684  filteredMatches[i].trainIdx=i;
1685 
1686  keypoints1[i]=cv::KeyPoint(InliersL[i],2);
1687  keypoints2[i]=cv::KeyPoint(InliersR[i],2);
1688  }
1689 
1690  cv::drawMatches(this->imleftund,keypoints1,this->imrightund,keypoints2,
1691  filteredMatches,matchImg,Scalar(0,0,255,0),Scalar(0,0,255,0));
1692 
1693  return matchImg;
1694 }
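
A short sketch that produces and saves the match visualization; estimateEssential() is called as well, since the keypoints drawn here are the inlier matches it retains (the output file name is illustrative):

    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    // Assumes the current image pair has already been set with setImages().
    void showMatches(StereoCamera &stereo)
    {
        stereo.findMatch();           // detect and match the keypoints
        stereo.estimateEssential();   // keeps only the inlier matches drawn below
        cv::Mat vis = stereo.drawMatches();
        if (!vis.empty())
            cv::imwrite("matches.png", vis);
    }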

◆ essentialDecomposition()

bool StereoCamera::essentialDecomposition ( )

It decomposes the essential matrix in Rotation and Translation between the two views.

The output is stored in the private members R and T.

Definition at line 1071 of file stereoCamera.cpp.

1072 {
1073  if (E.empty())
1074  {
1075  cout << "Essential Matrix is empty! Run the estimateEssential first!" << endl;
1076  return false;
1077  }
1078 
1079  if (this->InliersL.empty())
1080  {
1081  cout << "No matches in memory! Run findMatch first!" << endl;
1082  return false;
1083  }
1084 
1085  Mat W=Mat(3,3,CV_64FC1);
1086  W.setTo(0);
1087  W.at<double>(0,0)=0;
1088  W.at<double>(0,1)=-1;
1089  W.at<double>(0,2)=0;
1090 
1091  W.at<double>(1,0)=1;
1092  W.at<double>(1,1)=0;
1093  W.at<double>(1,2)=0;
1094 
1095  W.at<double>(2,0)=0;
1096  W.at<double>(2,1)=0;
1097  W.at<double>(2,2)=1;
1098 
1099  SVD dec(E);
1100 
1101  Mat Y=Mat::eye(3,3,CV_64FC1);
1102  Y.at<double>(2,2)=0.0;
1103  E=dec.u*Y*dec.vt; // projection to the Essential Matrix space
1104 
1105  dec(E);
1106 
1107  Mat V=dec.vt;
1108  Mat U=dec.u;
1109 
1110  Mat R1=U*W*V;
1111  Mat R2=U*W.t()*V;
1112 
1113  if (determinant(R1)<0 || determinant(R2)<0)
1114  {
1115  E=-E;
1116  SVD dec2(E);
1117 
1118  V=dec2.vt;
1119  U=dec2.u;
1120 
1121  R1=U*W*V;
1122  R2=U*W.t()*V;
1123  }
1124 
1125  Mat t1=U(Range(0,3),Range(2,3));
1126  Mat t2=-t1;
1127 
1128  Mat Rnew=Mat(3,3,CV_64FC1);
1129  Rnew.setTo(0);
1130  Mat tnew=Mat(3,1,CV_64FC1);
1131 
1132  chierality(R1,R2,t1,t2,Rnew,tnew,this->InliersL,this->InliersR);
1133 
1134  Mat rvec_new=Mat::zeros(3,1,CV_64FC1);
1135  Mat rvec_exp=Mat::zeros(3,1,CV_64FC1);
1136  Rodrigues(Rnew,rvec_new);
1137  Rodrigues(R_exp,rvec_exp);
1138 
1139  Mat t_est=(tnew/norm(tnew))*norm(this->T);
1140 
1141  Mat diff_angles=rvec_exp-rvec_new;
1142  Mat diff_tran=T_exp-t_est;
1143 
1144  std::cout << "[StereoCamera] Angles Differences: " << diff_angles.at<double>(0,0) << " " <<
1145  diff_angles.at<double>(1,0) << " " <<
1146  diff_angles.at<double>(2,0) << std::endl;
1147 
1148  std::cout << "[StereoCamera] Translation Differences: " << diff_tran.at<double>(0,0) << " " <<
1149  diff_tran.at<double>(1,0) << " " <<
1150  diff_tran.at<double>(2,0) << std::endl;
1151 
1152 
1153 
1154  // Magic numbers: rvec_new are the rotation angles, only vergence (rvec_new(1,0)) is allowed to be large
1155  // t_est is the translation estimated, it can change a little bit when joint 4 of the head is moving
1156  if (fabs(diff_angles.at<double>(0,0))<0.15 && fabs(diff_angles.at<double>(1,0))<0.15 && fabs(diff_angles.at<double>(2,0))<0.15 &&
1157  fabs(diff_tran.at<double>(0,0))<0.01 && fabs(diff_tran.at<double>(1,0))<0.01 && fabs(diff_tran.at<double>(2,0))<0.01)
1158  {
1159  lock_guard<mutex> lg(mtx);
1160  this->R=Rnew;
1161  this->T=t_est;
1162  this->updatePMatrix();
1163  this->cameraChanged=true;
1164  return true;
1165  }
1166  else
1167  return false;
1168 }

References chierality().

◆ estimateEssential()

void StereoCamera::estimateEssential ( )

It estimates the essential matrix (3x3) E between two views.

The output is stored in the private member E.

Definition at line 965 of file stereoCamera.cpp.

966 {
967  this->InliersL.clear();
968  this->InliersR.clear();
969 
970  if (this->PointsL.size()<10 || this->PointsL.size()<10 )
971  {
972  cout << "Not enough matches in memory! Run findMatch first!" << endl;
973  this->E=Mat(3,3,CV_64FC1);
974  return;
975  }
976 
977  updateExpectedCameraMatrices();
978  Mat F_exp=FfromP(Pleft_exp,Pright_exp);
979 
980  vector<Point2f> filteredL;
981  vector<Point2f> filteredR;
982 
983  std::cout << "[StereoCamera] " << PointsR.size() << " Matches Found." << std::endl;
984 
985  Mat pl=Mat(3,1,CV_64FC1);
986  Mat pr=Mat(3,1,CV_64FC1);
987 
988  for (size_t i=0; i<(int) PointsL.size(); i++)
989  {
990  pl.at<double>(0,0)=PointsL[i].x;
991  pl.at<double>(1,0)=PointsL[i].y;
992  pl.at<double>(2,0)=1;
993 
994  pr.at<double>(0,0)=PointsR[i].x;
995  pr.at<double>(1,0)=PointsR[i].y;
996  pr.at<double>(2,0)=1;
997 
998  Mat xrFxl=pr.t()*F_exp*pl;
999  Mat Fxl=F_exp*pl;
1000  Mat Fxr=F_exp.t()*pr;
1001 
1002  pow(xrFxl,2,xrFxl);
1003 
1004  pow(Fxl,2,Fxl);
1005 
1006  pow(Fxr,2,Fxr);
1007 
1008  Scalar den1,den2;
1009  den1=sum(Fxl);
1010  den2=sum(Fxr);
1011  double sampsonDistance=xrFxl.at<double>(0,0)/(den1.val[0]+den2.val[0]);
1012 
1013  if (sampsonDistance<0.1)
1014  {
1015  filteredL.push_back(PointsL[i]);
1016  filteredR.push_back(PointsR[i]);
1017  }
1018  }
1019 
1020  std::cout << "[StereoCamera] " << filteredL.size() << " Matches Left After Kinematics Filtering." << std::endl;
1021 
1022  vector<uchar> status;
1023  this->F=findFundamentalMat(Mat(filteredL), Mat(filteredR), status, CV_FM_8POINT, 1, 0.999);
1024 
1025  for (size_t i=0; i<(int) filteredL.size(); i++)
1026  {
1027  pl.at<double>(0,0)=filteredL[i].x;
1028  pl.at<double>(1,0)=filteredL[i].y;
1029  pl.at<double>(2,0)=1;
1030 
1031  pr.at<double>(0,0)=filteredR[i].x;
1032  pr.at<double>(1,0)=filteredR[i].y;
1033  pr.at<double>(2,0)=1;
1034 
1035  Mat xrFxl=pr.t()*F*pl;
1036  Mat Fxl=F*pl;
1037  Mat Fxr=F.t()*pr;
1038 
1039  pow(xrFxl,2,xrFxl);
1040  pow(Fxl,2,Fxl);
1041  pow(Fxr,2,Fxr);
1042 
1043  Scalar den1,den2;
1044  den1=sum(Fxl);
1045  den2=sum(Fxr);
1046 
1047  if (status[i]==1 && xrFxl.at<double>(0,0)<0.001)
1048  {
1049  InliersL.push_back(filteredL[i]);
1050  InliersR.push_back(filteredR[i]);
1051  }
1052  }
1053 
1054  std::cout << "[StereoCamera] " << InliersL.size() << " Matches Left After RANSAC Filtering." << std::endl;
1055 
1056  if (this->InliersL.size()<10 || this->InliersR.size()<10 )
1057  {
1058  InliersL.clear();
1059  InliersR.clear();
1060  cout << "Not enough matches in memory! Run findMatch first!" << endl;
1061  this->E=Mat(3,3,CV_64FC1);
1062  return;
1063  }
1064 
1065  this->F=findFundamentalMat(Mat(InliersL), Mat(InliersR),status, CV_FM_8POINT, 1, 0.999);
1066  this->E=this->Kright.t()*this->F*this->Kleft;
1067 
1068 }

References FfromP().
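
A sketch of the motion-estimation chain built on top of this method (the helper function is illustrative):

    #include <iostream>
    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    // Assumes the current image pair has already been set and undistorted.
    void updateExtrinsics(StereoCamera &stereo)
    {
        stereo.findMatch();          // 1) find SIFT correspondences
        stereo.estimateEssential();  // 2) filter them and estimate F, then E = Kr^T * F * Kl

        // 3) decompose E into R and T; the object is updated only when the
        //    estimate stays close to the expected kinematic configuration.
        if (stereo.essentialDecomposition())
        {
            std::cout << "R = " << stereo.getRotation()    << std::endl;
            std::cout << "T = " << stereo.getTranslation() << std::endl;
        }
    }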

◆ FfromP()

Mat StereoCamera::FfromP ( Mat &  P1,
Mat &  P2 
)

The function computes the fundamental matrix starting from known camera matrices.

Parameters
P1: a 3x4 matrix representing the camera matrix of the left view.
P2: a 3x4 matrix representing the camera matrix of the right view.
Returns
a 3x3 matrix representing the fundamental matrix.

Definition at line 890 of file stereoCamera.cpp.

891 {
892  Mat F_true(3,3,CV_64FC1);
893 
894  Mat X1(2,4,CV_64FC1);
895  Mat X2(2,4,CV_64FC1);
896  Mat X3(2,4,CV_64FC1);
897 
898  Mat Y1(2,4,CV_64FC1);
899  Mat Y2(2,4,CV_64FC1);
900  Mat Y3(2,4,CV_64FC1);
901 
902 
903  for(int i=0; i<P1.rows; i++)
904  {
905  for(int j=0; j<P1.cols; j++)
906  {
907  if(i==0)
908  {
909  X2.at<double>(1,j)= P1.at<double>(i,j);
910  X3.at<double>(0,j)= P1.at<double>(i,j);
911  Y2.at<double>(1,j)= P2.at<double>(i,j);
912  Y3.at<double>(0,j)= P2.at<double>(i,j);
913  }
914  if(i==1)
915  {
916  X1.at<double>(0,j)= P1.at<double>(i,j);
917  X3.at<double>(1,j)= P1.at<double>(i,j);
918  Y1.at<double>(0,j)= P2.at<double>(i,j);
919  Y3.at<double>(1,j)= P2.at<double>(i,j);
920  }
921 
922  if(i==2)
923  {
924  X1.at<double>(1,j)= P1.at<double>(i,j);
925  X2.at<double>(0,j)= P1.at<double>(i,j);
926  Y1.at<double>(1,j)= P2.at<double>(i,j);
927  Y2.at<double>(0,j)= P2.at<double>(i,j);
928 
929  }
930  }
931  }
932 
933 
934  std::vector<Mat> MatX;
935  std::vector<Mat> MatY;
936 
937  MatX.push_back(X1);
938  MatX.push_back(X2);
939  MatX.push_back(X3);
940 
941  MatY.push_back(Y1);
942  MatY.push_back(Y2);
943  MatY.push_back(Y3);
944 
945 
946  for(int i=0; i<F_true.rows; i++)
947  {
948  for(int j=0; j<F_true.cols; j++)
949  {
950  Mat X=MatX[i];
951  Mat Y=MatY[j];
952 
953  Mat concatenated;
954 
955  cv::vconcat(X,Y,concatenated);
956 
957  F_true.at<double>(j,i)=cv::determinant(concatenated);
958  }
959  }
960 
961  return F_true;
962 }

Referenced by estimateEssential().

◆ findMatch()

Mat StereoCamera::findMatch ( bool  visualize = false,
double  displacement = 20.0,
double  radius = 200.0 
)

It finds matches between two images.

A SIFT detector and descriptor are used.

Note
Run the setImages and undistortImages methods before using this method.
Parameters
visualize: true if you want to visualize the matches between the images
displacement: maximum pixel displacement between the first and the second camera
radius: maximum radius between the first candidate match and the second one

Definition at line 750 of file stereoCamera.cpp.

751 {
752  if (this->imleftund.empty() || this->imrightund.empty())
753  {
754  imleftund=imleft;
755  imrightund=imright;
756  }
757 
758  this->PointsL.clear();
759  this->PointsR.clear();
760 
761  this->InliersL.clear();
762  this->InliersR.clear();
763 
764  Mat grayleft(imleftund.rows,imleftund.cols, CV_8UC1);
765  imleftund.convertTo(grayleft,CV_8UC1);
766 
767  Mat grayright(imrightund.rows,imrightund.cols,CV_8UC1);
768  imrightund.convertTo(grayright,CV_8UC1);
769 
770  vector<KeyPoint> keypoints1,keypoints2;
771  Mat descriptors1,descriptors2;
772 
773 #ifdef OPENCV_GREATER_2
774  Ptr<xfeatures2d::SIFT> sift=xfeatures2d::SIFT::create();
775  yAssert(sift!=NULL);
776 
777  sift->detect(grayleft,keypoints1);
778  sift->compute(grayleft,keypoints1,descriptors1);
779 
780  sift->detect(grayright,keypoints2);
781  sift->compute(grayright,keypoints2,descriptors2);
782 #else
783  Ptr<cv::FeatureDetector> detector=cv::FeatureDetector::create("SIFT");
784  Ptr<cv::DescriptorExtractor> descriptorExtractor=cv::DescriptorExtractor::create("SIFT");
785 
786  yAssert(detector!=NULL);
787  yAssert(descriptorExtractor!=NULL);
788 
789  detector->detect(grayleft,keypoints1);
790  descriptorExtractor->compute(grayleft,keypoints1,descriptors1);
791 
792  detector->detect(grayright,keypoints2);
793  descriptorExtractor->compute(grayright,keypoints2,descriptors2);
794 #endif
795 
796  cv::BFMatcher descriptorMatcher;
797  vector<DMatch> filteredMatches;
798  crossCheckMatching(descriptorMatcher,descriptors1,descriptors2,filteredMatches,radius);
799 
800  for (size_t i=0; i<filteredMatches.size(); i++)
801  {
802  Point2f pointL=keypoints1[filteredMatches[i].queryIdx].pt;
803  Point2f pointR=keypoints2[filteredMatches[i].trainIdx].pt;
804  if (fabs(pointL.y-pointR.y)<displacement)
805  {
806  this->PointsR.push_back(pointR);
807  this->PointsL.push_back(pointL);
808  }
809  }
810 
811  Mat matchImg;
812  if (visualize)
813  cv::drawMatches(this->imleftund,keypoints1,this->imrightund,keypoints2,
814  filteredMatches,matchImg,Scalar(0,0,255,0),Scalar(0,0,255,0));
815 
816  return matchImg;
817 }
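
A sketch showing the calls that should precede the matching, as per the note above (the image arguments and the output file name are illustrative):

    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    // Finds SIFT matches between two images and saves a visualization.
    void matchPair(StereoCamera &stereo, const cv::Mat &imgL, const cv::Mat &imgR)
    {
        stereo.setImages(imgL, imgR);
        stereo.undistortImages();    // recommended before matching

        cv::Mat vis = stereo.findMatch(true,     // visualize
                                       20.0,     // displacement
                                       200.0);   // radius
        if (!vis.empty())
            cv::imwrite("sift_matches.png", vis);
    }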

◆ fromRectifiedToOriginal()

Point2f StereoCamera::fromRectifiedToOriginal ( int  u,
int  v,
int  camera 
)

Given the u,v pixel coordinates in the rectified image, the method returns the position of the pixel in the non-rectified frame.

Parameters
u: the x pixel coordinate in the rectified image.
v: the y pixel coordinate in the rectified image.
camera: camera=1 for the left image, camera=2 for the right image.
Returns
the pixel position in the non-rectified image.

Definition at line 722 of file stereoCamera.cpp.

723 {
724  cv::Point2f originalPoint;
725 
726 
727  if(u>=map11.rows || u<0 || v>=map12.cols || v< 0)
728  {
729  originalPoint.x=0;
730  originalPoint.y=0;
731  return originalPoint;
732  }
733  if(camera==LEFT)
734  {
735  originalPoint.x=map11.ptr<float>(v)[u];
736  originalPoint.y=map12.ptr<float>(v)[u];
737  }
738  else
739  {
740  originalPoint.x=map21.ptr<float>(v)[u];
741  originalPoint.y=map22.ptr<float>(v)[u];
742  }
743 
744 
745  return originalPoint;
746 
747 }
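
A tiny sketch of the intended use; the rectification maps are built by computeDisparity(), so that method is assumed to have been run, and the value 1 selects the left camera as described above:

    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    // Maps a pixel picked in the rectified left image back to the original left frame.
    cv::Point2f backToOriginal(StereoCamera &stereo, int u, int v)
    {
        return stereo.fromRectifiedToOriginal(u, v, 1 /* left camera */);
    }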

◆ getDisparity()

const Mat & StereoCamera::getDisparity ( ) const

It returns the disparity image.

Returns
the disparity image computed via computeDisparity(). The image is 8 bit unsigned.

Definition at line 533 of file stereoCamera.cpp.

533  {
534  return this->Disparity;
535 }

◆ getDisparity16()

const Mat & StereoCamera::getDisparity16 ( ) const

It returns the disparity image.

Returns
the disparity image computed via computeDisparity(). The image is 16 bit signed.

Definition at line 538 of file stereoCamera.cpp.

538  {
539  return this->Disparity16;
540 }

Referenced by metricTriangulation().

◆ getDistCoeffLeft()

const Mat & StereoCamera::getDistCoeffLeft ( ) const

It returns the 5x1 left distortion coefficients.

Returns
5x1 left distortion coefficients.

Definition at line 2283 of file stereoCamera.cpp.

2284 {
2285  return this->DistL;
2286 }

◆ getDistCoeffRight()

const Mat & StereoCamera::getDistCoeffRight ( ) const

It returns the 5x1 right distortion coefficients.

Returns
5x1 right distortion coefficients.

Definition at line 2289 of file stereoCamera.cpp.

2290 {
2291  return this->DistR;
2292 }

◆ getDistortedPixel()

Point2f StereoCamera::getDistortedPixel ( int  u,
int  v,
int  cam = 1 
)

Given the u,v pixel coordinates in the undistorted image, the method returns the original position of the pixel in the distorted frame.

Parameters
u: the x pixel coordinate in the undistorted image.
v: the y pixel coordinate in the undistorted image.
cam: cam=1 for the left image, cam=2 for the right image.
Returns
the pixel position in the distorted image.

Definition at line 2305 of file stereoCamera.cpp.

2306 {
2307  Point2f distortedPixel;
2308  Mat MapperX,MapperY;
2309 
2310  if(cam==LEFT)
2311  {
2312  MapperX=mapxL;
2313  MapperY=mapyL;
2314  }
2315  else
2316  {
2317  MapperX=mapxR;
2318  MapperY=mapyR;
2319  }
2320  distortedPixel.x=MapperX.ptr<float>(v)[u];
2321  distortedPixel.y=MapperY.ptr<float>(v)[u];
2322 
2323  return distortedPixel;
2324 }

◆ getFundamental()

const Mat & StereoCamera::getFundamental ( ) const

It returns the 3x3 fundamental matrix.

Returns
3x3 fundamental matrix.

Definition at line 1495 of file stereoCamera.cpp.

1495  {
1496  return this->F;
1497 }

◆ getImLeft()

const Mat & StereoCamera::getImLeft ( ) const

It returns the left (first) image.

Returns
the left (first) image.

Definition at line 523 of file stereoCamera.cpp.

523  {
524  return this->imleft;
525 }

Referenced by updateMappings().

◆ getImLeftUnd()

const Mat & StereoCamera::getImLeftUnd ( ) const

It returns the left undistorted image.

Returns
the left undistorted image.

Definition at line 1490 of file stereoCamera.cpp.

1490  {
1491  return this->imleftund;
1492 }

◆ getImRight()

const Mat & StereoCamera::getImRight ( ) const

It returns the right (second) image.

Returns
the right (second) image.

Definition at line 528 of file stereoCamera.cpp.

528  {
529  return this->imright;
530 }

◆ getImRightUnd()

const Mat & StereoCamera::getImRightUnd ( ) const

It returns the right undistorted image.

Returns
the right undistorted image.

Definition at line 1500 of file stereoCamera.cpp.

1500  {
1501  return this->imrightund;
1502 }

◆ getKleft()

const Mat & StereoCamera::getKleft ( ) const

It returns the 3x3 left camera matrix.

Returns
3x3 left camera matrix.

Definition at line 96 of file stereoCamera.cpp.

96  {
97  return this->Kleft;
98 }

◆ getKright()

const Mat & StereoCamera::getKright ( ) const

It returns the 3x3 right camera matrix.

Returns
3x3 right camera matrix.

Definition at line 101 of file stereoCamera.cpp.

101  {
102  return this->Kright;
103 }

◆ getLRectified()

const Mat & StereoCamera::getLRectified ( ) const

The method returns the first rectified image.

Returns
The first rectified image.

Definition at line 2161 of file stereoCamera.cpp.

2162 {
2163  return this->imgLeftRect;
2164 }

◆ getMapperL()

const Mat & StereoCamera::getMapperL ( ) const

It returns the mapping between the original left camera and the rectified left camera.

Returns
a 16 bit signed 2 channel image containing the mapping from the original left camera to the rectified left camera.

Definition at line 1915 of file stereoCamera.cpp.

1915  {
1916  return this->MapperL;
1917 }

Referenced by metricTriangulation().

◆ getMapperR()

const Mat & StereoCamera::getMapperR ( ) const

It returns the mapping between the original right camera and the rectified right camera.

Returns
a 16 bit signed 2 channel image containing the mapping from the original right camera to the rectified right camera.

Definition at line 1920 of file stereoCamera.cpp.

1920  {
1921  return this->MapperR;
1922 }

◆ getMatchLeft()

const vector< Point2f > & StereoCamera::getMatchLeft ( ) const

It returns the pixel coordinates of the matches in the left image.

Returns
pixel coordinates of the matches in the left image.

Definition at line 106 of file stereoCamera.cpp.

106  {
107  return this->InliersL;
108 }

◆ getMatchRight()

const vector< Point2f > & StereoCamera::getMatchRight ( ) const

It returns the pixel coordinates of the matches in the right image.

Returns
pixel coordinates of the matches in the right image.

Definition at line 111 of file stereoCamera.cpp.

111  {
112  return this->InliersR;
113 }

◆ getQ()

const Mat & StereoCamera::getQ ( ) const

It returns the 4x4 disparity-to-depth mapping matrix.

Returns
4x4 disparity-to-depth mapping matrix.

Definition at line 543 of file stereoCamera.cpp.

543  {
544  return this->Q;
545 }

◆ getRLrect()

const Mat & StereoCamera::getRLrect ( ) const

It returns the rotation matrix between the original left camera and the rectified left camera.

Returns
3x3 rotation matrix between the original left camera and the rectified left camera.

Definition at line 1905 of file stereoCamera.cpp.

1905  {
1906  return this->RLrect;
1907 }

Referenced by computeWorldImage(), metricTriangulation(), and triangulateKnownDisparity().

◆ getRotation()

const Mat & StereoCamera::getRotation ( ) const

It returns the rotation matrix between the two cameras.

Returns
3x3 rotation matrix between the first and the second camera.

Definition at line 1596 of file stereoCamera.cpp.

1596  {
1597  return this->R;
1598 }

Referenced by horn().

◆ getRRectified()

const Mat & StereoCamera::getRRectified ( ) const

The method returns the second rectified image.

Returns
The second rectified image.

Definition at line 2167 of file stereoCamera.cpp.

2168 {
2169  return this->imgRightRect;
2170 }

◆ getRRrect()

const Mat & StereoCamera::getRRrect ( ) const

It returns the rotation matrix between the original right camera and the rectified right camera.

Returns
3x3 rotation matrix between the original right camera and the rectified right camera.

Definition at line 1910 of file stereoCamera.cpp.

1910  {
1911  return this->RRrect;
1912 }

◆ getTranslation()

const Mat & StereoCamera::getTranslation ( ) const

It returns the translation vector between the two cameras.

Returns
3x1 translation matrix between the first and the second camera.

Definition at line 1591 of file stereoCamera.cpp.

1591  {
1592  return this->T;
1593 }

◆ horn()

void StereoCamera::horn ( Mat &  K1,
Mat &  K2,
vector< Point2f > &  Points1,
vector< Point2f > &  Points2,
Mat &  Rot,
Mat &  Tras 
)

It performs Horn's relative-orientation algorithm, i.e.

it estimates the motion from one camera to the other using an initial guess. A good initial guess can be obtained using the essentialDecomposition method.

Parameters
K1: 3x3 matrix with the intrinsic parameters of the first camera
K2: 3x3 matrix with the intrinsic parameters of the second camera
Points1: matches in the first image
Points2: matches in the second image
Rot: initial rotation (3x3 matrix) guess. The new output rotation is stored here
Tras: initial translation (3x1 matrix) guess. The new output translation is stored here

Definition at line 1697 of file stereoCamera.cpp.

1697  {
1698  double prevres = 1E40;
1699  double res = 1E39;
1700  double vanishing = 1E-16;
1701  Tras=Tras/norm(Tras);
1702 
1703  normalizePoints(K1,K2,PointsL,PointsR);
1704  int iters=0;
1705  Mat B(3,3,CV_64FC1);
1706  Mat C(3,3,CV_64FC1);
1707  Mat D(3,3,CV_64FC1);
1708  Mat cs(3,1,CV_64FC1);
1709  Mat ds(3,1,CV_64FC1);
1710  Mat r1(3,1,CV_64FC1);
1711  Mat r2(3,1,CV_64FC1);
1712 
1713  while ( (prevres - res > vanishing) ) {
1714  iters = iters+1;
1715 
1716  B.setTo(0);
1717  C.setTo(0);
1718  D.setTo(0);
1719  cs.setTo(0);
1720  ds.setTo(0);
1721 
1722  prevres=res;
1723  res=0;
1724  for(int i=0; i<(int) PointsL.size(); i++) {
1725 
1726  r1.at<double>(0,0)=PointsL[i].x;
1727  r1.at<double>(1,0)=PointsL[i].y;
1728  r1.at<double>(2,0)=1;
1729  r1=r1/norm(r1);
1730 
1731  r2.at<double>(0,0)=PointsR[i].x;
1732  r2.at<double>(1,0)=PointsR[i].y;
1733  r2.at<double>(2,0)=1;
1734  r2=r2/norm(r2);
1735 
1736 
1737  Mat r1p= Rot*r1;
1738 
1739  Mat ci=r1p.cross(r2);
1740  Mat di=r1p.cross(r2.cross(Tras));
1741  Mat si=Tras.t()*ci;
1742 
1743 
1744  B=B+(ci*di.t());
1745  D=D+(di*di.t());
1746  C=C+(ci*ci.t());
1747 
1748  cs=cs+ (si.at<double>(0,0)*ci);
1749  ds=ds+ (si.at<double>(0,0)*di);
1750 
1751  Mat residual=Tras.t()*ci*ci.t()*Tras;
1752  res=res+residual.at<double>(0,0);
1753 
1754  }
1755 
1756  Mat L(7,7,CV_64FC1);
1757  L.setTo(0);
1758 
1759  for(int i=0; i<3; i++)
1760  for(int j=0; j<3; j++)
1761  L.at<double>(i,j)=C.at<double>(i,j);
1762 
1763  for(int i=0; i<3; i++)
1764  for(int j=3; j<6; j++)
1765  L.at<double>(i,j)=B.at<double>(i,j-3);
1766 
1767  for(int i=0; i<3; i++)
1768  L.at<double>(i,6)=Tras.at<double>(i,0);
1769 
1770 
1771  for(int i=3; i<6; i++)
1772  for(int j=0; j<3; j++) {
1773  Mat Bt=B.t();
1774  L.at<double>(i,j)=Bt.at<double>(i-3,j);
1775 
1776  }
1777 
1778 
1779  for(int i=3; i<6; i++)
1780  for(int j=3; j<6; j++)
1781  L.at<double>(i,j)=D.at<double>(i-3,j-3);
1782 
1783  for(int j=0; j<3; j++) {
1784  Mat Trast=Tras.t();
1785  L.at<double>(6,j)=Trast.at<double>(0,j);
1786  }
1787 
1788 
1789  Mat Y(7,1,CV_64FC1);
1790  Y.setTo(0);
1791 
1792  for(int j=0; j<3; j++)
1793  Y.at<double>(j,0)=-cs.at<double>(j,0);
1794 
1795  for(int j=3; j<6; j++)
1796  Y.at<double>(j,0)=-ds.at<double>(j-3,0);
1797 
1798  Mat Linv=L.inv();
1799  Mat result=Linv*Y;
1800  Tras=Tras+result(Range(0,3),Range(0,1));
1801  Tras=Tras/norm(Tras);
1802 
1803  Mat q(4,1,CV_64FC1);
1804 
1805  Mat temp=result(Range(3,6),Range(0,1));
1806  q.at<double>(0,0)= sqrt(1-(0.25* norm(temp)*norm(temp)));
1807  q.at<double>(1,0)= 0.5*result.at<double>(3,0);
1808  q.at<double>(2,0)= 0.5*result.at<double>(4,0);
1809  q.at<double>(3,0)= 0.5*result.at<double>(5,0);
1810 
1811 
1812  Mat deltaR(3,3,CV_64FC1);
1813  getRotation(q,deltaR);
1814 
1815  Rot=deltaR*Rot;
1816 
1817  SVD dec(Rot);
1818 
1819  Mat Id = Mat::eye(3, 3, CV_64F);
1820 
1821  Mat Vt=dec.vt;
1822  Mat U=dec.u;
1823 
1824  Rot=U*Id*Vt;
1825  }
1826 
1827 
1828 
1829 }

References getRotation().

Referenced by hornRelativeOrientations().

◆ hornRelativeOrientations()

void StereoCamera::hornRelativeOrientations ( )

It performs Horn's relative-orientation algorithm; all the parameters are assumed to be initialized in the StereoCamera object.

The new output Rotation and Translation matrices are stored in the R and T members.

Definition at line 1635 of file stereoCamera.cpp.

1635  {
1636 
1637  if(this->PointsL.size()<10 || this->PointsR.size()<10) {
1638  cout << "No matches found! Run findMatch fist!" << endl;
1639  return;
1640  }
1641 
1642 
1643  if(this->Kleft.empty() || this->Kright.empty() || this->R.empty() || this->T.empty()) {
1644  cout << "Cameras are empty, run Calibration first" << endl;
1645  return;
1646  }
1647 
1648  if(InliersL.empty()) {
1649  InliersL=PointsL;
1650  InliersR=PointsR;
1651  }
1652 
1653  Mat Rot=this->R.clone();
1654  Mat Tras=this->T.clone();
1655  horn(this->Kleft,this->Kright,this->InliersL,this->InliersR,Rot,Tras);
1656 
1657 
1658  this->R=Rot.clone();
1659  this->Rinit=Rot.clone();
1660 
1661  this->T=Tras/norm(Tras)*norm(T);
1662  this->Tinit=Tras/norm(Tras)*norm(Tinit);
1663 
1664  this->updatePMatrix();
1665 }

References horn().
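
A sketch of a refinement step that starts from the R and T already stored in the object (the helper function is illustrative):

    #include <iostream>
    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    // Assumes calibration parameters are set and an image pair is loaded.
    void refineExtrinsics(StereoCamera &stereo)
    {
        stereo.findMatch();                 // correspondences must be in memory
        stereo.hornRelativeOrientations();  // refined R and T are stored internally

        std::cout << "refined R = " << stereo.getRotation()    << std::endl;
        std::cout << "refined T = " << stereo.getTranslation() << std::endl;
    }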

◆ initELAS()

void StereoCamera::initELAS ( yarp::os::ResourceFinder &  rf)

Initialization of ELAS parameters.

Parameters
rf: the ResourceFinder mechanism is used to set the parameters either to their default values or to the values passed by the user via the command line. See the documentation of the SFM module for the list of parameters processed by this initialization function.

Definition at line 164 of file stereoCamera.cpp.

165 {
166  use_elas = true;
167 
168  string elas_string = rf.check("elas_setting",Value("ROBOTICS")).asString();
169 
170  double disp_scaling_factor = rf.check("disp_scaling_factor",Value(1.0)).asFloat64();
171 
172  elaswrap = new elasWrapper(disp_scaling_factor, elas_string);
173 
174 
175  if (rf.check("elas_subsampling"))
176  elaswrap->set_subsampling(true);
177 
178  if (rf.check("elas_add_corners"))
179  elaswrap->set_add_corners(true);
180 
181 
182  elaswrap->set_ipol_gap_width(40);
183  if (rf.check("elas_ipol_gap_width"))
184  elaswrap->set_ipol_gap_width(rf.find("elas_ipol_gap_width").asInt32());
185 
186 
187  if (rf.check("elas_support_threshold"))
188  elaswrap->set_support_threshold(rf.find("elas_support_threshold").asFloat64());
189 
190  if(rf.check("elas_gamma"))
191  elaswrap->set_gamma(rf.find("elas_gamma").asFloat64());
192 
193  if (rf.check("elas_sradius"))
194  elaswrap->set_sradius(rf.find("elas_sradius").asFloat64());
195 
196  if (rf.check("elas_match_texture"))
197  elaswrap->set_match_texture(rf.find("elas_match_texture").asInt32());
198 
199  if (rf.check("elas_filter_median"))
200  elaswrap->set_filter_median(rf.find("elas_filter_median").asBool());
201 
202  if (rf.check("elas_filter_adaptive_mean"))
203  elaswrap->set_filter_adaptive_mean(rf.find("elas_filter_adaptive_mean").asBool());
204 
205  cout << endl << "ELAS parameters:" << endl << endl;
206 
207  cout << "disp_scaling_factor: " << disp_scaling_factor << endl;
208 
209  cout << "setting: " << elas_string << endl;
210 
211  cout << "postprocess_only_left: " << elaswrap->get_postprocess_only_left() << endl;
212  cout << "subsampling: " << elaswrap->get_subsampling() << endl;
213 
214  cout << "add_corners: " << elaswrap->get_add_corners() << endl;
215 
216  cout << "ipol_gap_width: " << elaswrap->get_ipol_gap_width() << endl;
217 
218  cout << "support_threshold: " << elaswrap->get_support_threshold() << endl;
219  cout << "gamma: " << elaswrap->get_gamma() << endl;
220  cout << "sradius: " << elaswrap->get_sradius() << endl;
221 
222  cout << "match_texture: " << elaswrap->get_match_texture() << endl;
223 
224  cout << "filter_median: " << elaswrap->get_filter_median() << endl;
225  cout << "filter_adaptive_mean: " << elaswrap->get_filter_adaptive_mean() << endl;
226 
227  cout << endl;
228 }
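
A sketch of how the ELAS backend can be enabled; rf is assumed to contain both the stereoCalib calibration and any of the elas_* options listed above (e.g. passed on the command line as --elas_subsampling --elas_ipol_gap_width 32):

    #include <yarp/os/ResourceFinder.h>
    #include <stereoCamera.h>

    int main(int argc, char *argv[])
    {
        yarp::os::ResourceFinder rf;
        rf.configure(argc, argv);

        StereoCamera stereo(rf);   // calibration loaded from the config file
        stereo.initELAS(rf);       // subsequent computeDisparity() calls use ELAS

        return 0;
    }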

◆ metricTriangulation() [1/2]

Point3f StereoCamera::metricTriangulation ( Point2f &  point1,
double  thMeters = 10 
)

It performs the metric triangulation given the pixel coordinates on the first image.

Run computeDisparity() before using this method.

Parameters
point1: the pixel coordinates in the first image.
Returns
a metric 3D point w.r.t. the first camera reference system.

Definition at line 1954 of file stereoCamera.cpp.

1954  {
1955  lock_guard<mutex> lg(mtx);
1956 
1957  if(Q.empty() || Disparity16.empty()) {
1958  cout << "Run computeDisparity() method first!" << endl;
1959  Point3f point;
1960  point.x=0.0;
1961  point.y=0.0;
1962  point.z=0.0;
1963  return point;
1964  }
1965 
1966  int u=(int) point1.x;
1967  int v=(int) point1.y;
1968  Point3f point;
1969 
1970  // Mapping from Rectified Cameras to Original Cameras
1971  Mat Mapper=this->getMapperL();
1972 
1973  if(Mapper.empty()) {
1974  point.x=0.0;
1975  point.y=0.0;
1976  point.z=0.0;
1977  return point;
1978  }
1979 
1980  float usign=Mapper.ptr<float>(v)[2*u];
1981  float vsign=Mapper.ptr<float>(v)[2*u+1];
1982 
1983  u=cvRound(usign);
1984  v=cvRound(vsign);
1985 
1986  const cv::Mat& disp16=this->getDisparity16();
1987 
1988  if(u<0 || u>=disp16.size().width || v<0 || v>=disp16.size().height) {
1989  point.x=0.0;
1990  point.y=0.0;
1991  point.z=0.0;
1992  return point;
1993  }
1994 
1995  CvScalar scal=cvGet2D(&disp16,v,u);
1996  double disparity=scal.val[0]/16.0;
1997  float w= (float) ((float) disparity*Q.at<double>(3,2)) + ((float)Q.at<double>(3,3));
1998  point.x= (float)((float) (usign+1)*Q.at<double>(0,0)) + ((float) Q.at<double>(0,3));
1999  point.y=(float)((float) (vsign+1)*Q.at<double>(1,1)) + ((float) Q.at<double>(1,3));
2000  point.z=(float) Q.at<double>(2,3);
2001 
2002  point.x=point.x/w;
2003  point.y=point.y/w;
2004  point.z=point.z/w;
2005 
2006  // discard points far more than thMeters meters or with not valid disparity (<0)
2007  if(point.z>thMeters || point.z<0) {
2008  point.x=0.0;
2009  point.y=0.0;
2010  point.z=0.0;
2011  }
2012  else {
2013  Mat P(3,1,CV_64FC1);
2014  P.at<double>(0,0)=point.x;
2015  P.at<double>(1,0)=point.y;
2016  P.at<double>(2,0)=point.z;
2017 
2018  // Rototranslation from rectified camera to original camera
2019  P=this->getRLrect().t()*P;
2020 
2021  point.x=(float) P.at<double>(0,0);
2022  point.y=(float) P.at<double>(1,0);
2023  point.z=(float) P.at<double>(2,0);
2024  }
2025 
2026  return point;
2027 }

References getDisparity16(), and getMapperL().
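
A sketch that recovers the metric 3D position of a pixel of the first (left) image; computeDisparity() is assumed to have been run, and the pixel location is illustrative:

    #include <cstdio>
    #include <opencv2/opencv.hpp>
    #include <stereoCamera.h>

    void triangulatePixel(StereoCamera &stereo)
    {
        cv::Point2f pixel(160.0f, 120.0f);                   // pixel in the first image
        cv::Point3f p = stereo.metricTriangulation(pixel);   // w.r.t. the first camera frame

        if (p.z > 0.0f)   // (0,0,0) is returned for invalid or out-of-range points
            std::printf("X=%f Y=%f Z=%f\n", p.x, p.y, p.z);
    }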

◆ metricTriangulation() [2/2]

Point3f StereoCamera::metricTriangulation ( Point2f &  point1,
Mat &  H,
double  thMeters = 10 
)

It performs the metric triangulation given the pixel coordinates on the first image.

The 3D point is w.r.t. the system defined by the parameter H. Run computeDisparity() before using this method.

Parameters
point1: the pixel coordinates in the first image.
H: the 4x4 rototranslation matrix of the system.
Returns
a metric 3D point w.r.t. the reference system defined by H.

Definition at line 2030 of file stereoCamera.cpp.

2030  {
2031  lock_guard<mutex> lg(mtx);
2032 
2033  if(H.empty())
2034  H=H.eye(4,4,CV_64FC1);
2035 
2036  if(Q.empty() || Disparity16.empty()) {
2037  cout << "Run computeDisparity() method first!" << endl;
2038  Point3f point;
2039  point.x=0.0;
2040  point.y=0.0;
2041  point.z=0.0;
2042  return point;
2043  }
2044 
2045  int u=(int) point1.x; // matrix starts from (0,0), pixels from (1,1)
2046  int v=(int) point1.y;
2047  Point3f point;
2048 
2049 
2050  // Mapping from Rectified Cameras to Original Cameras
2051  Mat Mapper=this->getMapperL();
2052 
2053  if(Mapper.empty()) {
2054  point.x=0.0;
2055  point.y=0.0;
2056  point.z=0.0;
2057  return point;
2058  }
2059 
2060  float usign=Mapper.ptr<float>(v)[2*u];
2061  float vsign=Mapper.ptr<float>(v)[2*u+1];
2062 
2063  u=cvRound(usign);
2064  v=cvRound(vsign);
2065 
2066  const cv::Mat& disp16=this->getDisparity16();
2067 
2068  if(u<0 || u>=disp16.size().width || v<0 || v>=disp16.size().height) {
2069  point.x=0.0;
2070  point.y=0.0;
2071  point.z=0.0;
2072  return point;
2073  }
2074 
2075  CvScalar scal=cvGet2D(&disp16,v,u);
2076  double disparity=scal.val[0]/16.0;
2077  float w= (float) ((float) disparity*Q.at<double>(3,2)) + ((float)Q.at<double>(3,3));
2078  point.x= (float)((float) (usign+1)*Q.at<double>(0,0)) + ((float) Q.at<double>(0,3));
2079  point.y=(float)((float) (vsign+1)*Q.at<double>(1,1)) + ((float) Q.at<double>(1,3));
2080  point.z=(float) Q.at<double>(2,3);
2081 
2082  point.x=point.x/w;
2083  point.y=point.y/w;
2084  point.z=point.z/w;
2085 
2086  // discard points far more than thMeters meters or with not valid disparity (<0)
2087  if(point.z>thMeters || point.z<0) {
2088  point.x=0.0;
2089  point.y=0.0;
2090  point.z=0.0;
2091  return point;
2092  }
2093 
2094  Mat RLrectTmp=this->getRLrect().t();
2095  Mat Tfake = Mat::zeros(0,3,CV_64F);
2096  Mat P(4,1,CV_64FC1);
2097  P.at<double>(0,0)=point.x;
2098  P.at<double>(1,0)=point.y;
2099  P.at<double>(2,0)=point.z;
2100  P.at<double>(3,0)=1;
2101 
2102  Mat Hrect=buildRotTras(RLrectTmp,Tfake);
2103  P=H*Hrect*P;
2104 
2105  point.x=(float) ((float) P.at<double>(0,0)/P.at<double>(3,0));
2106  point.y=(float) ((float) P.at<double>(1,0)/P.at<double>(3,0));
2107  point.z=(float) ((float) P.at<double>(2,0)/P.at<double>(3,0));
2108 
2109  return point;
2110 }

References getDisparity16(), getMapperL(), and getRLrect().
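
A minimal sketch of the H-referenced overload; the 4x4 transform below (10 cm translation along x, no rotation) is an arbitrary example value, not one suggested by the library.

#include <opencv2/opencv.hpp>
#include <stereoCamera.h>

cv::Point3f triangulateInExternalFrame(StereoCamera &cam, const cv::Point2f &pix)
{
    // Hypothetical 4x4 rototranslation H: 10 cm translation along x, no rotation.
    cv::Mat H = cv::Mat::eye(4, 4, CV_64FC1);
    H.at<double>(0, 3) = 0.10;

    cv::Point2f p = pix;                   // the method takes a non-const reference
    return cam.metricTriangulation(p, H);  // default thMeters = 10
}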

◆ projectPoints3D()

vector< Point2f > StereoCamera::projectPoints3D ( string  camera,
vector< Point3f > &  points3D,
Mat &  H 
)

The method projects a set of 3D points, expressed in the Cartesian reference frame defined by H, onto the image plane of the specified camera and returns their 2D coordinates.

Parameters
camera"left" or "right" camera
point3Dthe list of the 3D position in the reference frame H
Hthe transformation from the camera reference system to the H reference system
Returns
The 2D positions.

Definition at line 2173 of file stereoCamera.cpp.

2174 {
2175  vector<Point2f> points2D;
2176 
2177  if(this->Kleft.empty() || this->DistL.empty() || this->Kright.empty() || this->DistR.empty()) {
2178  cout <<" Cameras are not calibrated! Run the Calibration first!" << endl;
2179  return points2D;
2180  }
2181 
2182  if(H.empty())
2183  H=H.eye(4,4,CV_64FC1);
2184 
2185  lock_guard<mutex> lg(mtx);
2186  for (int i=0; i<points3D.size(); i++)
2187  {
2188  // Apply inverse Trasformation for each point
2189  Point3f point=points3D[i];
2190  Mat P(4,1,CV_64FC1);
2191  P.at<double>(0,0)=point.x;
2192  P.at<double>(1,0)=point.y;
2193  P.at<double>(2,0)=point.z;
2194  P.at<double>(3,0)=1;
2195 
2196  P=H.inv()*P;
2197 
2198  point.x=(float) ((float) P.at<double>(0,0)/P.at<double>(3,0));
2199  point.y=(float) ((float) P.at<double>(1,0)/P.at<double>(3,0));
2200  point.z=(float) ((float) P.at<double>(2,0)/P.at<double>(3,0));
2201 
2202  points3D[i]=point;
2203  }
2204 
2205  Mat cameraMatrix, distCoeff, rvec, tvec;
2206  rvec=Mat::zeros(3,1,CV_64FC1);
2207 
2208  if(camera=="left")
2209  {
2210  cameraMatrix=this->Kleft;
2211  distCoeff=this->DistL;
2212  Mat R2= Mat::eye(3,3,CV_64FC1);
2213  Rodrigues(R2,rvec);
2214  tvec=Mat::zeros(3,1,CV_64FC1);
2215  }
2216  else
2217  {
2218  cameraMatrix=this->Kright;
2219  distCoeff=this->DistR;
2220  Mat R2= this->R;
2221  Rodrigues(R2,rvec);
2222  tvec=this->T;
2223  }
2224 
2225  Mat points3Mat(points3D);
2226  projectPoints(points3Mat,rvec,tvec,cameraMatrix,distCoeff,points2D);
2227  return points2D;
2228 }
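
A minimal sketch, assuming cam is calibrated; the two 3D points and the identity H are illustrative values only. Note that, as the listing above shows, the input vector is transformed in place by H.inv() before projection.

#include <vector>
#include <opencv2/opencv.hpp>
#include <stereoCamera.h>

std::vector<cv::Point2f> projectToLeft(StereoCamera &cam)
{
    // Two hypothetical 3D points, half a metre in front of the camera.
    std::vector<cv::Point3f> pts3D = { {0.0f, 0.0f, 0.5f}, {0.1f, 0.0f, 0.5f} };
    cv::Mat H = cv::Mat::eye(4, 4, CV_64FC1);      // points already in the camera frame
    return cam.projectPoints3D("left", pts3D, H);  // pts3D is modified in place (see above)
}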

◆ rectifyImages()

void StereoCamera::rectifyImages ( )

The method rectifies the two images: it transforms each image plane so that pairs of conjugate epipolar lines become collinear and parallel to one of the image axes (i.e. there is zero disparity along the Y axis).

Definition at line 548 of file stereoCamera.cpp.

549 {
550  if(this->Kleft.empty() || this->DistL.empty() || this->Kright.empty() || this->DistR.empty()) {
551  cout <<" Cameras are not calibrated! Run the Calibration first!" << endl;
552  return;
553  }
554  if(this->imleft.empty() || this->imright.empty()) {
555  cout << "Images are not set! set the images first!" << endl;
556  return;
557  }
558 
559  Size img_size = this->imleft.size();
560 
561  if(cameraChanged)
562  {
563  lock_guard<mutex> lg(mtx);
564  stereoRectify(this->Kleft, this->DistL, this->Kright, this->DistR, img_size, this->R, this->T, this->RLrect, this->RRrect, this->PLrect, this->PRrect, this->Q, -1);
565  }
566 
567  if(cameraChanged)
568  {
569  initUndistortRectifyMap(this->Kleft, this->DistL, this->RLrect, this->PLrect, img_size, CV_32FC1, this->map11, this->map12);
570  initUndistortRectifyMap(this->Kright, this->DistR, this->RRrect, this->PRrect, img_size, CV_32FC1, this->map21, this->map22);
571  }
572 
573  remap(this->imleft, this->imgLeftRect, this->map11, this->map12, cv::INTER_LINEAR);
574  remap(this->imright, this->imgRightRect, this->map21,this->map22, cv::INTER_LINEAR);
575 
576  cameraChanged = false;
577 
578 }
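
A minimal sketch of the rectification step, assuming cam already holds a valid calibration; the image paths are placeholders.

#include <opencv2/opencv.hpp>
#include <stereoCamera.h>

void rectifyPair(StereoCamera &cam)
{
    cam.setImages(cv::imread("left.png"), cv::imread("right.png"));  // placeholder paths
    cam.rectifyImages();   // after this call, conjugate epipolar lines are horizontal
}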

◆ remapDisparity()

cv::Mat StereoCamera::remapDisparity ( cv::Mat  disp)

Remaps the disparity map on the basis of the mapping previously computed.

Parameters
disp: the disparity map to be remapped
Returns
the remapped disparity map

Definition at line 2423 of file stereoCamera.cpp.

2424 {
2425 
2426  cv::Mat remapped;
2427 
2428  Mat x;
2429  remap(disp,remapped,this->MapperL,x,cv::INTER_LINEAR);
2430  remapped.convertTo(remapped,CV_8U);
2431 
2432  return remapped;
2433 
2434 }

◆ saveCalibration()

void StereoCamera::saveCalibration ( string  extrinsicFilePath,
string  intrinsicFilePath 
)

It saves the calibration.

Parameters
extrinsicFilePath: the path of the extrinsic parameters file
intrinsicFilePath: the path of the intrinsic parameters file

Definition at line 470 of file stereoCamera.cpp.

470  {
471 
472  if( Kleft.empty() || Kright.empty() || DistL.empty() || DistR.empty() || R.empty() || T.empty()) {
473  std::cout << "Error: cameras are not calibrated! Run the calibration or set intrinsic and extrinsic parameters" << std::endl;
474  return;
475  }
476 
477  FileStorage fs(intrinsicFilePath+".yml", FileStorage::WRITE);
478  if( fs.isOpened() )
479  {
480  fs << "M1" << Kleft << "D1" << DistL << "M2" << Kright << "D2" << DistR;
481  fs.release();
482  }
483  else
484  cout << "Error: can not save the intrinsic parameters\n";
485 
486  fs.open(extrinsicFilePath+".yml", FileStorage::WRITE);
487  if( fs.isOpened() )
488  {
489  fs << "R" << R << "T" << T <<"Q" << Q;
490  fs.release();
491  }
492  else
493  std::cout << "Error: can not save the extrinsic parameters" << std::endl;
494 
495  ofstream fout((intrinsicFilePath+".ini").c_str());
496 
497  // Left Eye
498  fout << "[left]" << endl;
499  fout << "fx " << Kleft.at<double>(0,0) << endl;
500  fout << "fy " << Kleft.at<double>(1,1) << endl;
501  fout << "cx " << Kleft.at<double>(0,2) << endl;
502  fout << "cy " << Kleft.at<double>(1,2) << endl;
503  fout << "k1 " << DistL.at<double>(0,0) << endl;
504  fout << "k2 " << DistL.at<double>(1,0) << endl;
505  fout << "p1 " << DistL.at<double>(2,0) << endl;
506  fout << "p2 " << DistL.at<double>(3,0) << endl;
507 
508  // Right Eye
509  fout << "[right]" << endl;
510  fout << "fx " << Kright.at<double>(0,0) << endl;
511  fout << "fy " << Kright.at<double>(1,1) << endl;
512  fout << "cx " << Kright.at<double>(0,2) << endl;
513  fout << "cy " << Kright.at<double>(1,2) << endl;
514  fout << "k1 " << DistR.at<double>(0,0) << endl;
515  fout << "k2 " << DistR.at<double>(1,0) << endl;
516  fout << "p1 " << DistR.at<double>(2,0) << endl;
517  fout << "p2 " << DistR.at<double>(3,0) << endl;
518 
519  fout.close();
520 }
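
A minimal sketch; the file stems below are hypothetical, and, as the listing shows, the method appends the .yml extension itself (plus an .ini copy of the intrinsics).

#include <stereoCamera.h>

void persistCalibration(StereoCamera &cam)
{
    // Writes /tmp/extrinsics.yml, /tmp/intrinsics.yml and /tmp/intrinsics.ini
    cam.saveCalibration("/tmp/extrinsics", "/tmp/intrinsics");
}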

◆ setExpectedPosition()

void StereoCamera::setExpectedPosition ( Mat &  Rot,
Mat &  Tran 
)

The function sets the expected rotation and translation parameters for the current image pair.

They can be computed using the kinematics.

Parameters
Rot: 3x3 matrix representing the rotation between the left and the right camera.
Tran: 3x1 vector representing the translation between the left and the right camera.

Definition at line 2416 of file stereoCamera.cpp.

2417 {
2418  R_exp=Rot;
2419  T_exp=Tran;
2420 }

◆ setImages()

void StereoCamera::setImages ( const Mat &  firstImg,
const Mat &  secondImg 
)

It stores in memory a couple of images.

Parameters
firstImg: the image acquired from the first (main) camera
secondImg: the image acquired from the second (secondary) camera

Definition at line 231 of file stereoCamera.cpp.

231  {
232  this->imleft=left;
233  this->imright=right;
234 }

◆ setIntrinsics()

void StereoCamera::setIntrinsics ( Mat &  K1,
Mat &  K2,
Mat &  Dist1,
Mat &  Dist2 
)

It sets the intrinsic parameters.

Parameters
K1: 3x3 camera matrix of the first camera.
K2: 3x3 camera matrix of the second camera.
Dist1: 4x1 distortion coefficients vector of the first camera.
Dist2: 4x1 distortion coefficients vector of the second camera.

Definition at line 1940 of file stereoCamera.cpp.

1940  {
1941  lock_guard<mutex> lg(mtx);
1942  this->Kleft=KL;
1943  this->Kright=KR;
1944  this->DistL=DistL;
1945  this->DistR=DistR;
1946 
1947  if(!this->R.empty() && !this->T.empty())
1948  updatePMatrix();
1949  this->cameraChanged=true;
1950  buildUndistortRemap();
1951 }

Referenced by StereoCamera().
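
A minimal sketch with hand-built matrices; the focal lengths, principal point and zero distortion below are made-up numbers, shown only to illustrate the expected 3x3 and 4x1 shapes.

#include <opencv2/opencv.hpp>
#include <stereoCamera.h>

void setExampleIntrinsics(StereoCamera &cam)
{
    cv::Mat K1 = (cv::Mat_<double>(3, 3) << 410.0,   0.0, 160.0,
                                              0.0, 410.0, 120.0,
                                              0.0,   0.0,   1.0);
    cv::Mat K2 = K1.clone();                       // same made-up intrinsics for both cameras
    cv::Mat D1 = cv::Mat::zeros(4, 1, CV_64FC1);   // k1, k2, p1, p2 set to zero
    cv::Mat D2 = cv::Mat::zeros(4, 1, CV_64FC1);
    cam.setIntrinsics(K1, K2, D1, D2);
}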

◆ setMatches()

void StereoCamera::setMatches ( std::vector< cv::Point2f > &  pointsL,
std::vector< cv::Point2f > &  pointsR 
)

The function initializes the matches of the current image pair.

For example, matches can be computed on the GPU at a higher frame rate.

Parameters
pointsL: vector of Point2f representing the keypoints on the left image.
pointsR: vector of Point2f representing the keypoints on the right image.

Definition at line 2408 of file stereoCamera.cpp.

2409 {
2410  PointsL=pointsL;
2411  PointsR=pointsR;
2412 
2413 }
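
A minimal sketch; the single correspondence below is an illustrative value (in practice the vectors would come from an external matcher, e.g. a GPU one).

#include <vector>
#include <opencv2/opencv.hpp>
#include <stereoCamera.h>

void feedExternalMatches(StereoCamera &cam)
{
    std::vector<cv::Point2f> ptsL = { {160.0f, 120.0f} };   // illustrative keypoint
    std::vector<cv::Point2f> ptsR = { {128.0f, 120.0f} };   // its illustrative match
    cam.setMatches(ptsL, ptsR);
}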

◆ setRotation()

void StereoCamera::setRotation ( Mat &  Rot,
int  mode = 0 
)

It sets the rotation matrix (if known) between the first and the second camera.

Parameters
Rot: the 3x3 rotation matrix.
mode: the following values are allowed: mode=0 sets the rotation matrix R equal to Rot; mode=1 sets R equal to Rot*R; mode=2 sets R equal to Rot*Rinit.

Definition at line 1505 of file stereoCamera.cpp.

1505  {
1506  lock_guard<mutex> lg(mtx);
1507  if(mul==0)
1508  this->R=Rot;
1509  if(mul==1)
1510  this->R=Rot*R;
1511  if(mul==2)
1512  this->R=Rot*Rinit;
1513 
1514  if(R_exp.empty())
1515  R_exp=R;
1516  this->updatePMatrix();
1517  this->cameraChanged=true;
1518 }

Referenced by StereoCamera().

◆ setTranslation()

void StereoCamera::setTranslation ( Mat &  Tras,
int  mul = 0 
)

It sets the translation vector (if known) between the first and the second camera.

Parameters
Tras: the 3x1 translation vector.
mode: the following values are allowed: mode=0 sets the translation vector T equal to Tras; mode=1 sets T equal to Tras+T; mode=2 sets T equal to Tras+Tinit.

Definition at line 1521 of file stereoCamera.cpp.

1521  {
1522  lock_guard<mutex> lg(mtx);
1523  if(mul==0)
1524  this->T=Tras;
1525  if(mul==1)
1526  this->T=Tras+T;
1527  if(mul==2)
1528  this->T=Tras+Tinit;
1529 
1530  if(T_exp.empty())
1531  T_exp=T;
1532 
1533  if(!this->Kleft.empty() && !this->Kright.empty())
1534  this->updatePMatrix();
1535  this->cameraChanged=true;
1536 }

Referenced by StereoCamera().
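
A minimal sketch covering setRotation() and setTranslation() together; the identity rotation and the 68 mm baseline are arbitrary example values, and mode 0 simply overwrites the stored R and T.

#include <opencv2/opencv.hpp>
#include <stereoCamera.h>

void setExampleExtrinsics(StereoCamera &cam)
{
    cv::Mat R = cv::Mat::eye(3, 3, CV_64FC1);                  // no relative rotation
    cv::Mat T = (cv::Mat_<double>(3, 1) << -0.068, 0.0, 0.0);  // 68 mm baseline along x
    cam.setRotation(R, 0);       // mode 0: overwrite the stored rotation
    cam.setTranslation(T, 0);    // mode 0: overwrite the stored translation
}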

◆ stereoCalibration()

void StereoCamera::stereoCalibration ( vector< string >  imageList,
int  boardWidth,
int  boardHeight,
float  sqsize = 1.0 
)

It performs the stereo camera calibration.

(see stereoCalibration module)

Parameters
imageList: the list containing the paths of the images with the chessboard patterns. Even indices refer to left camera images (i.e. main camera images), while odd indices refer to right camera images.
boardWidth: the number of inner corners in the width direction of the chessboard pattern (see stereoCalibration module)
boardHeight: the number of inner corners in the height direction of the chessboard pattern (see stereoCalibration module)
sqsize: the side length of a square of the chessboard pattern. It is needed for a metric reconstruction.

Definition at line 250 of file stereoCamera.cpp.

250  {
251  Size boardSize;
252  boardSize.width=boardWidth;
253  boardSize.height=boardHeight;
254  runStereoCalib(imagelist, boardSize,sqsize);
255 
256 }
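
A minimal sketch; the file names, the 8x6 inner-corner pattern and the 25 mm square size are placeholders. Note the required interleaving of the list (even indices: left/main camera, odd indices: right camera).

#include <string>
#include <vector>
#include <stereoCamera.h>

void calibrateFromDisk(StereoCamera &cam)
{
    // Even indices: left (main) camera; odd indices: right camera.
    std::vector<std::string> imageList = {
        "left_00.png", "right_00.png",
        "left_01.png", "right_01.png"
    };
    // 8x6 inner corners, 25 mm squares for a metric reconstruction.
    cam.stereoCalibration(imageList, 8, 6, 0.025f);
}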

◆ triangulateKnownDisparity()

Point3f StereoCamera::triangulateKnownDisparity ( float  u,
float  v,
float  d,
Mat &  H 
)

It performs the metric triangulation given the pixel coordinates on the first image and the disparity between the two RECTIFIED images.

The 3D point is expressed w.r.t. the reference system defined by the parameter H.

Parameters
u: the pixel x coordinate in the first image.
v: the pixel y coordinate in the first image.
d: the disparity on the x coordinate between the two rectified images.
H: the 4x4 rototranslation matrix of the system; it can be an empty matrix.
Returns
a metric 3D point w.r.t. the reference system defined by H.

Definition at line 2113 of file stereoCamera.cpp.

2114 {
2115  lock_guard<mutex> lg(mtx);
2116  if(Q.empty())
2117  {
2118  cout << "Run rectifyImages() method first!" << endl;
2119  Point3f point;
2120  point.x=0.0;
2121  point.y=0.0;
2122  point.z=0.0;
2123  return point;
2124  }
2125 
2126  if(H.empty())
2127  H=H.eye(4,4,CV_64FC1);
2128 
2129  Point3f point;
2130 
2131  float w= (float) ((float) d*Q.at<double>(3,2)) + ((float)Q.at<double>(3,3));
2132  point.x= (float)((float) (u)*Q.at<double>(0,0)) + ((float) Q.at<double>(0,3));
2133  point.y=(float)((float) (v)*Q.at<double>(1,1)) + ((float) Q.at<double>(1,3));
2134  point.z=(float) Q.at<double>(2,3);
2135 
2136  // Rectified Camera System
2137  point.x=point.x/w;
2138  point.y=point.y/w;
2139  point.z=point.z/w;
2140 
2141  // We transform to H Coordinate System
2142  Mat RLrectTmp=this->getRLrect().t(); // First it transform the point to the unrectified camera reference system
2143  Mat Tfake = Mat::zeros(0,3,CV_64F);
2144  Mat P(4,1,CV_64FC1);
2145  P.at<double>(0,0)=point.x;
2146  P.at<double>(1,0)=point.y;
2147  P.at<double>(2,0)=point.z;
2148  P.at<double>(3,0)=1;
2149 
2150  Mat Hrect=buildRotTras(RLrectTmp,Tfake);
2151  P=H*Hrect*P;
2152 
2153  point.x=(float) ((float) P.at<double>(0,0)/P.at<double>(3,0));
2154  point.y=(float) ((float) P.at<double>(1,0)/P.at<double>(3,0));
2155  point.z=(float) ((float) P.at<double>(2,0)/P.at<double>(3,0));
2156 
2157  return point;
2158 }

References getRLrect().
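
A minimal sketch; the pixel coordinates, the 32-pixel disparity and the identity H (which keeps the left-camera frame) are illustrative values only.

#include <opencv2/opencv.hpp>
#include <stereoCamera.h>

cv::Point3f fromKnownDisparity(StereoCamera &cam)
{
    cv::Mat H = cv::Mat::eye(4, 4, CV_64FC1);   // keep the left-camera reference frame
    // Pixel (160,120) of the rectified left image with a 32-pixel disparity.
    return cam.triangulateKnownDisparity(160.0f, 120.0f, 32.0f, H);
}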

◆ triangulation() [1/2]

Point3f StereoCamera::triangulation ( Point2f &  point1,
Point2f &  point2 
)

It performs the triangulation using the 3x4 camera matrices P1 and P2 stored internally.

The triangulation obtained is not metric! Use the method metricTriangulation if you want a metric triangulation.

Parameters
point1: the 2D point coordinates in the first image.
point2: the 2D point coordinates in the second image.
Returns
a 3D point w.r.t. the first camera reference system.

Definition at line 861 of file stereoCamera.cpp.

861  {
862 
863  Point3f point3D;
864  Mat J=Mat(4,4,CV_64FC1);
865  J.setTo(0);
866  for(int j=0; j<4; j++) {
867 
868  int rowA=0;
869  int rowB=2;
870 
871  J.at<double>(0,j)=(pointleft.x*this->Pleft.at<double>(rowB,j))- (this->Pleft.at<double>(rowA,j));
872  J.at<double>(2,j)=(pointRight.x*this->Pright.at<double>(rowB,j))- (this->Pright.at<double>(rowA,j));
873 
874  rowA=1;
875 
876  J.at<double>(1,j)=(pointleft.y*this->Pleft.at<double>(rowB,j))- (this->Pleft.at<double>(rowA,j));
877  J.at<double>(3,j)=(pointRight.y*this->Pright.at<double>(rowB,j))- (this->Pright.at<double>(rowA,j));
878  }
879  SVD decom(J);
880  Mat V= decom.vt;
881 
882  point3D.x=(float) ((float) V.at<double>(3,0))/((float) V.at<double>(3,3));
883  point3D.y=(float) ((float) V.at<double>(3,1))/((float) V.at<double>(3,3));
884  point3D.z=(float) ((float) V.at<double>(3,2))/((float) V.at<double>(3,3));
885  return point3D;
886 
887 }

Referenced by chierality().
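
A minimal sketch; the two pixel coordinates are placeholder correspondences, and the result is defined only up to scale, as noted above.

#include <opencv2/opencv.hpp>
#include <stereoCamera.h>

cv::Point3f triangulateMatch(StereoCamera &cam)
{
    cv::Point2f pL(160.0f, 120.0f);   // placeholder match on the left image
    cv::Point2f pR(128.0f, 120.0f);   // corresponding placeholder on the right image
    return cam.triangulation(pL, pR); // up-to-scale result; use metricTriangulation for metres
}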

◆ triangulation() [2/2]

Point3f StereoCamera::triangulation ( Point2f &  point1,
Point2f &  point2,
Mat  Camera1,
Mat  Camera2 
)

It performs the triangulation (HZ Chap 12.2 homogeneous solution).

The triangulation obtained is not metric! Use the method metricTriangulation if you want a metric triangulation.

Parameters
point1: the 2D point coordinates in the first image.
point2: the 2D point coordinates in the second image.
Camera1: the 3x4 camera matrix of the first image.
Camera2: the 3x4 camera matrix of the second image.
Returns
a 3D point w.r.t. the first camera reference system.

Definition at line 1370 of file stereoCamera.cpp.

1370  {
1371 
1372  Point3f point3D;
1373  Mat J=Mat(4,4,CV_64FC1);
1374  J.setTo(0);
1375 
1376  for(int j=0; j<4; j++) {
1377 
1378  int rowA=0;
1379  int rowB=2;
1380 
1381  J.at<double>(0,j)=(pointleft.x*Camera1.at<double>(rowB,j))- (Camera1.at<double>(rowA,j));
1382  J.at<double>(2,j)=(pointRight.x*Camera2.at<double>(rowB,j))- (Camera2.at<double>(rowA,j));
1383 
1384  rowA=1;
1385 
1386  J.at<double>(1,j)=(pointleft.y*Camera1.at<double>(rowB,j))- (Camera1.at<double>(rowA,j));
1387  J.at<double>(3,j)=(pointRight.y*Camera2.at<double>(rowB,j))- (Camera2.at<double>(rowA,j));
1388  }
1389  SVD decom(J);
1390  Mat V= decom.vt;
1391 
1392  // printMatrix(V);
1393 
1394  /*Mat sol=Mat(4,1,CV_64FC1);
1395  sol.at<double>(0,0)=V.at<double>(0,0);
1396  sol.at<double>(1,0)=V.at<double>(1,1);
1397  sol.at<double>(2,0)=V.at<double>(2,2);
1398  sol.at<double>(3,0)=V.at<double>(3,3);
1399 
1400  Mat test=J*sol;
1401 
1402  printMatrix(test);*/
1403  point3D.x=(float) ((float) V.at<double>(3,0))/((float) V.at<double>(3,3));
1404  point3D.y=(float) ((float) V.at<double>(3,1))/((float) V.at<double>(3,3));
1405  point3D.z=(float) ((float) V.at<double>(3,2))/((float) V.at<double>(3,3));
1406  return point3D;
1407 
1408 }

◆ triangulationLS()

Point3f StereoCamera::triangulationLS ( Point2f &  point1,
Point2f &  point2,
Mat  Camera1,
Mat  Camera2 
)

It performs the least-squares triangulation (HZ Chap 12.2 inhomogeneous solution).

The triangulation obtained is not metric! Use the method metricTriangulation if you want a metric triangulation.

Parameters
point1: the 2D point coordinates in the first image.
point2: the 2D point coordinates in the second image.
Camera1: the 3x4 camera matrix of the first image.
Camera2: the 3x4 camera matrix of the second image.
Returns
a 3D point w.r.t. the first camera reference system.

◆ undistortImages()

void StereoCamera::undistortImages ( )

It undistorts the images.

Note
Set undistortion coefficients before using this method.

Definition at line 1475 of file stereoCamera.cpp.

1475  {
1476  if(this->Kleft.empty() || this->DistL.empty() || this->Kright.empty() || this->DistR.empty()) {
1477  cout <<" Cameras are not calibrated! Run the Calibration first!" << endl;
1478  return;
1479  }
1480  if(this->imleft.empty() || this->imright.empty()) {
1481  cout << "Images are not set! set the images first!" << endl;
1482  return;
1483  }
1484 
1485  undistort(this->imleft,this->imleftund,this->Kleft,this->DistL);
1486  undistort(this->imright,this->imrightund,this->Kright,this->DistR);
1487 }

◆ updateMappings()

void StereoCamera::updateMappings ( )

It recomputes the mappings (MapperL and MapperR) between the original cameras and the rectified cameras, provided that the camera parameters have changed since the last update.

Definition at line 2437 of file stereoCamera.cpp.

2438 {
2439 
2440  int rows = this->getImLeft().rows;
2441  int cols = this->getImLeft().cols;
2442 
2443  if (cameraChanged)
2444  {
2445 
2446  std::lock_guard<std::mutex> lock(mtx);
2447 
2448  Mat inverseMapL(rows*cols,1,CV_32FC2);
2449  Mat inverseMapR(rows*cols,1,CV_32FC2);
2450 
2451  for (int y=0; y<rows; y++)
2452  {
2453  for (int x=0; x<cols; x++)
2454  {
2455  inverseMapL.ptr<float>(y*cols+x)[0]=(float)x;
2456  inverseMapL.ptr<float>(y*cols+x)[1]=(float)y;
2457  inverseMapR.ptr<float>(y*cols+x)[0]=(float)x;
2458  inverseMapR.ptr<float>(y*cols+x)[1]=(float)y;
2459  }
2460  }
2461 
2462  undistortPoints(inverseMapL,inverseMapL,this->Kleft,this->DistL,this->RLrect,this->PLrect);
2463  undistortPoints(inverseMapR,inverseMapR,this->Kright,this->DistR,this->RRrect,this->PRrect);
2464 
2465  Mat mapperL=inverseMapL.reshape(2,rows);
2466  Mat mapperR=inverseMapR.reshape(2,rows);
2467  this->MapperL=mapperL;
2468  this->MapperR=mapperR;
2469 
2470  }
2471 
2472 }

References getImLeft().


The documentation for this class was generated from the following files: