Determining the position of the camera in augmented reality from visual information alone remains a significant challenge. Initialization of the algorithm is particularly problematic when no prior information about the structure of the scene is available. In this thesis, we developed a solution based on an existing algorithm for camera localization in augmented reality (PTAM) that initializes itself robustly and automatically by importing a previously reconstructed scene, or a part of it, into the tracking algorithm, and can also indirectly determine the scale of the scene. The method recognizes the current scene in the input recording and uses the appropriate model for localization. The proposed solution was evaluated on a test dataset consisting of prepared recordings and scene reconstructions. We show that our approach improves the success rate and precision of the original PTAM algorithm. Our solution was also compared with the reference tracking-and-mapping algorithm ORB-SLAM2, and it achieved a better success rate with comparable error while also being faster. Finally, the usability of our software solution in augmented reality is demonstrated with an example application.