In the future, robots will take over much of human labour. To do so, they will have to be as autonomous as possible, which includes the ability to navigate through space on their own. Successful navigation requires knowledge of one's location in space.
Many methods exist that deal with the problem of self-localization, and they usually rely on data acquired with a depth sensor. In this thesis we explore the possibilities of self-localization using only panoramic images obtained with an omnidirectional camera. Localization is performed using the statistical methods PCA, KPCA, CCA and KCCA, which construct a low-dimensional subspace from the high-dimensional input images. The images are then projected onto this subspace, giving an alternative representation of the environment that can be used to predict the locations of test images. All of the methods are implemented for use with the ATRV mini mobile robot. The accuracy of self-localization is evaluated and a few suggestions for improvement are proposed.
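To illustrate the general idea behind these subspace methods, the following is a minimal sketch of PCA-based localization on synthetic data (the array sizes, the nearest-neighbour prediction step, and the `localize` helper are illustrative assumptions, not the thesis's actual implementation):

```python
import numpy as np

# Synthetic stand-in for the training set: each "image" is a flattened
# pixel vector tagged with a known (x, y) location.
rng = np.random.default_rng(0)
n_train, n_pixels, n_components = 50, 200, 5

train_images = rng.normal(size=(n_train, n_pixels))
train_locations = rng.uniform(0, 10, size=(n_train, 2))

# PCA: centre the data, then take the top right-singular vectors,
# which span the low-dimensional subspace.
mean = train_images.mean(axis=0)
centred = train_images - mean
_, _, vt = np.linalg.svd(centred, full_matrices=False)
basis = vt[:n_components]

# Training images represented in the subspace.
train_proj = centred @ basis.T

def localize(image):
    """Predict the location of an image as the location of the
    nearest training image in the low-dimensional subspace."""
    proj = (image - mean) @ basis.T
    idx = np.argmin(np.linalg.norm(train_proj - proj, axis=1))
    return train_locations[idx]

# A training image should localize to its own recorded position.
print(np.allclose(localize(train_images[7]), train_locations[7]))  # True
```

KPCA, CCA and KCCA follow the same project-then-predict pattern, differing in how the subspace is constructed (kernelized and/or correlated with the location data).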