Deep learning has been a field of great academic interest and substantial breakthroughs over the last decade. Its applications are numerous, and over the last five years it has also spread to the field of game playing, owing largely to two chief accomplishments of Google's DeepMind team: Deep Q-Networks (DQN), which learned to play classic Atari 2600 games, and AlphaZero, which learned to play the board games chess, shogi, and Go strictly through self-play. In this thesis we attempted to build on the success of AlphaZero by adapting its self-play architecture to fighting games, a popular genre of video games. The results were less successful than we had hoped, however, as time constraints proved to be an insurmountable obstacle.