This thesis addresses the problem of rendering reflective materials in computer graphics. To solve it, we developed two convolutional neural networks whose goal is to generate the most accurate possible image of the surroundings (an environment map) at a given point in the rendered scene. Both approaches deliberately exploit overfitting, so each network must be trained separately for a specific scene. The first network takes the x, y, and z coordinates of a point in the scene as input and generates the image of the surroundings at that point as output. In the second approach, we triangulate the scene and capture an image of the surroundings at every triangle vertex; the second network then receives three such images (captured at the vertices of the enclosing triangle) together with weights reflecting the distance of the query point from those vertices, and outputs a prediction of the image of the surroundings at that point. Both approaches successfully predict images of the surroundings at desired points in space, even when those points were not part of the training set, although their accuracy depends on the complexity of the scene. Both methods eliminate the sharp transitions between reflection probes that occur when moving through the scene and are therefore suitable for rendering reflections on moving objects.
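
The first approach can be illustrated with a minimal sketch. The thesis does not specify the architecture, so the layer sizes, the decoder structure, and the 64x64 output resolution below are illustrative assumptions, not the actual network used in the work:

```python
import torch
import torch.nn as nn

class CoordToEnvMap(nn.Module):
    """Maps a 3D scene position to an environment-map image.

    A minimal sketch: layer widths and the 64x64 output resolution
    are assumptions for illustration only.
    """
    def __init__(self):
        super().__init__()
        # Expand the (x, y, z) coordinate into a small feature grid.
        self.fc = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),
            nn.Linear(256, 128 * 4 * 4), nn.ReLU(),
        )
        # Transposed convolutions upsample 4x4 -> 64x64.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8x8
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 64x64 RGB
        )

    def forward(self, xyz):                    # xyz: (B, 3)
        h = self.fc(xyz).view(-1, 128, 4, 4)
        return self.deconv(h)                  # (B, 3, 64, 64)

# The network is deliberately overfitted to one scene: train on probe
# images captured at known positions, then query arbitrary positions.
model = CoordToEnvMap()
env = model(torch.tensor([[0.5, 1.0, -2.0]]))  # environment map at (0.5, 1.0, -2.0)
```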
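
The second approach might be sketched as follows. The exact input encoding and weighting scheme are not detailed in the abstract, so stacking the three probe images along the channel axis and scaling each by its distance weight are assumptions made for this illustration:

```python
import torch
import torch.nn as nn

class ProbeInterpNet(nn.Module):
    """Predicts the environment map at a point inside a triangle from
    the three probe images captured at its vertices plus per-vertex
    distance weights. A minimal sketch under assumed conventions.
    """
    def __init__(self):
        super().__init__()
        # Three RGB probes stacked -> 9 input channels; a small
        # convolutional stack refines the weighted combination.
        self.conv = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, probes, weights):
        # probes:  (B, 3, 3, H, W) -- three RGB probe images
        # weights: (B, 3)          -- distance-based weight per vertex
        w = weights.view(-1, 3, 1, 1, 1)
        x = (probes * w).flatten(1, 2)  # weight each probe, stack: (B, 9, H, W)
        return self.conv(x)             # predicted environment map: (B, 3, H, W)

model = ProbeInterpNet()
probes = torch.rand(1, 3, 3, 64, 64)            # probes at the triangle's vertices
weights = torch.tensor([[0.2, 0.3, 0.5]])       # query point's vertex weights
pred = model(probes, weights)                   # environment map at the query point
```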