In this thesis, we develop a process for calibrating stereo cameras aimed at merging point clouds obtained from stereo depth data for use in sensing systems. Our process is based on detecting a chessboard pattern in images and computing camera parameters from the positions of the chessboard corners in the image pairs. We achieve more robust calibration by additionally verifying the orientations of the chessboards. Point clouds are merged using the computed extrinsic parameters of the cameras. We evaluate the accuracy of our calibration method with quantitative measures, specifically reprojection and depth errors, alongside qualitative evaluation through visual inspection, to thoroughly validate our approach. We also measure the computational complexity of the algorithms and analyze their applicability to real-time systems.
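As a rough illustration of the kind of chessboard-based stereo calibration summarized above (a sketch, not the thesis implementation), the snippet below uses OpenCV to detect chessboard corners in synchronized image pairs and estimate the extrinsic rotation and translation between two cameras. The pattern size, square size, and fixed-intrinsics assumption are placeholders, and the orientation verification described in the thesis is omitted.

```python
import cv2
import numpy as np

PATTERN = (9, 6)        # inner corners per chessboard row/column (assumed)
SQUARE_SIZE = 0.025     # chessboard square edge length in metres (assumed)

# 3-D coordinates of the chessboard corners in the board's own frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE


def detect_corners(gray):
    """Detect and sub-pixel-refine chessboard corners in a grayscale image."""
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)


def calibrate_pair(left_images, right_images, K1, D1, K2, D2, image_size):
    """Estimate the rotation R and translation T from the left camera to the
    right camera, given synchronized images and known intrinsics K*, D*."""
    obj_pts, left_pts, right_pts = [], [], []
    for img_l, img_r in zip(left_images, right_images):
        c_l = detect_corners(cv2.cvtColor(img_l, cv2.COLOR_BGR2GRAY))
        c_r = detect_corners(cv2.cvtColor(img_r, cv2.COLOR_BGR2GRAY))
        if c_l is None or c_r is None:
            continue  # skip pairs where the board is not visible in both views
        obj_pts.append(objp)
        left_pts.append(c_l)
        right_pts.append(c_r)

    # The RMS reprojection error returned here is one quantitative accuracy
    # measure of the kind referred to in the abstract.
    rms, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, D1, K2, D2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return rms, R, T
```

The extrinsics (R, T) obtained in this way can be used to transform one camera's point cloud into the other camera's coordinate frame, mirroring the point-cloud merging step described above.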