This thesis explores the problem of parking inside a simulator using the Deep Deterministic Policy Gradient (DDPG) reinforcement learning algorithm. We review the theoretical background of reinforcement learning and neural networks, and study DDPG in depth. Building on this background, we implement an agent capable of parking in an empty parking lot. We compare different neural network architectures and examine how changing their depth and width affects the results. We evaluate the results by the percentage of successful episodes, the average number of steps needed for a successful episode, and the paths the car takes during parking.
The most successful architecture solved the parking task from a random starting point with a 100% success rate and an average of 20 steps per episode. We then tested this architecture on courses with obstacles representing progressively harder variants of perpendicular, reverse, and parallel parking. The results are promising and leave room for further research and development.