Nowadays we encounter intelligent agents on a daily basis: when we use the internet, take a flight, or play video games. These agents have many things in common, and many of them require substantial computational resources. In some situations, for instance on mobile devices, we may not have the resources needed to power such an agent. In these cases we need less demanding yet still intelligent agents that closely mimic human behaviour.
Although real-time multiplayer gaming is growing in popularity on stationary consoles, a secure and fast internet connection is not always available on handheld devices, which makes real-time multiplayer less attractive there. Because of this, many mobile games opt for asynchronous multiplayer modes or put more focus on a single-player mode. To create the best single-player experience, we want the opponent to seem human, which requires an intelligent agent that can learn with limited resources. To achieve this, we combine several different artificial intelligence approaches.
We chose Knoxball, a mobile game, as an example environment in which to implement our agent. Knoxball is a dynamic game that mixes soccer and air hockey. Our agent is based on a combination of reinforcement learning algorithms. Reinforcement learning does not need labelled data to start learning; the agent gathers data on its own. The agent was never taught the rules of Knoxball; it learns from experience, using feedback from the environment.
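To make this learning loop concrete, the sketch below shows a minimal tabular Q-learning agent in Python. This is only an illustration of the general reinforcement learning cycle of acting, observing a reward, and updating value estimates; it is not the exact combination of algorithms used in our agent, and the action set, state encoding, and environment interface (reset/step) are hypothetical placeholders for Knoxball.

    # Minimal reinforcement-learning sketch (tabular Q-learning).
    # NOTE: illustrative only; the actions, states, rewards, and the
    # env.reset()/env.step() interface are assumed, not taken from Knoxball.
    import random
    from collections import defaultdict

    ACTIONS = ["up", "down", "left", "right", "kick"]  # assumed action set

    class QLearningAgent:
        def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
            self.q = defaultdict(float)   # Q-values indexed by (state, action)
            self.alpha = alpha            # learning rate
            self.gamma = gamma            # discount factor
            self.epsilon = epsilon        # exploration rate

        def act(self, state):
            # Epsilon-greedy: explore occasionally, otherwise pick the
            # action with the highest learned value for this state.
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q[(state, a)])

        def learn(self, state, action, reward, next_state):
            # One-step Q-learning update driven purely by environment feedback;
            # the agent is never told the rules of the game.
            best_next = max(self.q[(next_state, a)] for a in ACTIONS)
            td_target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

    def train(agent, env, episodes=1000):
        # Generic training loop over an environment exposing reset() and step().
        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                action = agent.act(state)
                next_state, reward, done = env.step(action)
                agent.learn(state, action, reward, next_state)
                state = next_state

A simple table-based approach like this keeps memory and computation low, which matches the limited-resource setting of handheld devices, although our agent combines several reinforcement learning techniques rather than relying on a single tabular method.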
After the initial learning period, our agent proved to be versatile and able to play Knoxball fairly well, and its performance improved over time. By the end of testing, our agent was able to beat the finite-state-machine-based agent, and it fared well against human players too.