One of the main use cases of augmented reality is adding objects and markings to the 3D space captured by a mobile device's camera. New objects inserted into a scene need lighting that matches the real lighting, so that they appear more realistic and blend in better with their surroundings. In this dissertation, we developed a method for fast and robust detection of scene lighting using convolutional neural networks, which can be applied in the context of augmented reality. We also created a dataset of synthetic images for training the convolutional neural network. We trained multiple models with different backbone architectures and compared their accuracy and speed on a dataset of photos captured in the real world. The experimental results demonstrate that convolutional neural networks can successfully determine the direction of the main light source on data not seen during training. Finally, we visualized the results on several of the captured real-world photos.
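As an illustration of the kind of setup described above, the sketch below shows how a light-direction regressor with a swappable backbone might look. The ResNet-18 backbone, the three-dimensional unit-vector output, and the angular loss are assumptions made for this example and are not prescribed by the dissertation.

```python
# Illustrative sketch (assumptions, not the dissertation's exact model):
# a CNN that regresses the main light direction as a unit vector,
# with a swappable torchvision ResNet backbone.
import torch
import torch.nn as nn
import torchvision.models as models


class LightDirectionNet(nn.Module):
    def __init__(self, backbone: str = "resnet18"):
        super().__init__()
        # Any torchvision ResNet variant works here; the final classification
        # layer is replaced by a 3-dimensional regression head.
        net = getattr(models, backbone)(weights=None)
        net.fc = nn.Linear(net.fc.in_features, 3)
        self.backbone = net

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize so the predicted light direction is a unit vector.
        return nn.functional.normalize(self.backbone(x), dim=1)


def angular_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Mean angle (in radians) between predicted and ground-truth directions.
    cos = (pred * target).sum(dim=1).clamp(-1.0, 1.0)
    return torch.acos(cos).mean()


# Example usage: one training step on a dummy batch of RGB images.
model = LightDirectionNet("resnet18")
images = torch.randn(4, 3, 224, 224)
targets = nn.functional.normalize(torch.randn(4, 3), dim=1)
loss = angular_loss(model(images), targets)
loss.backward()
```

Comparing backbones in this framework amounts to passing a different architecture name while keeping the regression head and loss fixed, which is one way the accuracy/speed trade-off mentioned above could be measured.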