
Intelligent agent with limited resources in a dynamic computer game (Inteligentni agent z omejenimi viri v dinamični računalniški igri)
McPartlin, Declan (Author), Robnik Šikonja, Marko (Mentor)

PDF - Presentation file (1.85 MB)
MD5: 44166E1FAA825F10F66ABD5D1A55FE78
PID: 20.500.12556/rul/d214e690-671f-4fe0-9730-13b95e4f4459

Abstract
Nowadays we encounter intelligent agents daily: when we use the internet, fly on an airplane, or play computer games. All such agents share many traits, and some of them require substantial computational resources. In some cases the resources available for effective operation are limited, for example on mobile devices. In such cases we need an agent that is as undemanding as possible yet intelligent enough to still faithfully mimic human behaviour. Although the popularity of real-time multiplayer gaming is growing on stationary consoles, a fast and reliable internet connection is not always available on mobile devices. For this reason, real-time multiplayer is not as widely used on mobile devices; asynchronous multiplayer and single-player modes are more common. In single-player mode, the simulated opponent must mimic a human as closely as possible to provide a good user experience. We therefore need an intelligent agent capable of learning with limited resources. In this master's thesis we demonstrate, on the example of a dynamic game, several artificial intelligence algorithms that attempt to achieve this goal. As the environment for implementing the intelligent agent we used the game Knoxball, a dynamic game that is a mix of soccer and air hockey. We built the agent on reinforcement learning, which, unlike supervised learning, needs no labelled data, since it learns from its own experience. The agent does not know the rules of the game; it learns from experience and from feedback received from the environment. The agent's performance grew with accumulated experience, and over time it proved useful. At the end of testing it beat, without major difficulty, the finite-state-machine-based agent that served as the baseline. It also achieved good results against human players.

Language: Slovenian
Keywords: intelligent agent, limited resources, reinforcement learning, dynamic game
Work type: Master's thesis/paper
Organization: FRI - Faculty of Computer and Information Science
Year: 2016
PID: 20.500.12556/RUL-81189
Publication date in RUL: 31.03.2016
Views: 1895
Downloads: 464

Secondary language

Language: English
Title: Limited resources based Intelligent agent for dynamic computer games
Abstract:
Nowadays we encounter intelligent agents on a daily basis: when we use the internet, take a flight, or play video games. These agents have much in common, and some of them require substantial computational resources. In some situations, for instance on mobile devices, the resources needed to power such an agent may not be available. In these cases we need less demanding yet still intelligent agents that closely mimic human behaviour. Although real-time multiplayer gaming is growing in popularity on stationary consoles, a reliable and fast internet connection is not always available on handheld devices, which makes real-time multiplayer gaming less attractive there. Because of this, many mobile games opt for asynchronous multiplayer modes or focus on single-player play. To create the best user experience in single-player mode, we want the opponent to seem human; we therefore need an intelligent agent that can learn with limited resources, and to that end we combine several artificial intelligence approaches. We chose Knoxball, a mobile game, as the environment in which to implement our agent. Knoxball is a dynamic game that mixes soccer and air hockey. Our agent is based on a combination of reinforcement learning algorithms. Reinforcement learning needs no labelled data to start learning; the agent gathers data on its own. The agent was never taught the rules of Knoxball; it learns from experience, using feedback from the environment. After the initial learning period our agent proved versatile and able to play Knoxball fairly well, and its efficiency grew with time. At the end of testing, our agent was able to beat the finite-state-machine-based agent. The agent fared well against human players too.
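The abstract describes an agent that learns purely from environment feedback, with no labelled data and no knowledge of the game's rules. The thesis itself does not specify its algorithms in this record, so the following is only a minimal tabular Q-learning sketch of that idea on a hypothetical toy environment; the states, actions, and reward shaping are illustrative assumptions, not the thesis's actual Knoxball agent.

```python
import random
from collections import defaultdict

# Toy 1-D "push the ball to the goal" environment; positions 0..GOAL.
# All names here are hypothetical illustrations, not from the thesis.
ACTIONS = [-1, +1]   # move left / move right
GOAL = 4             # reaching this position ends an episode

def step(state, action):
    """Toy dynamics: the agent nudges the ball; reaching GOAL yields reward 1."""
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy selection: explore sometimes, exploit otherwise.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            # Q-learning update: move the estimate toward reward plus the
            # discounted value of the best next action. Only environment
            # feedback is used -- no labelled data, no rules of the game.
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
    return q

q = train()
# The greedy policy derived from the learned values should move toward GOAL.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
```

The same loop structure scales to a richer state space (e.g. ball and player positions in a game like Knoxball), and the small lookup table is what keeps the memory and compute footprint suitable for resource-limited devices.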

Keywords: intelligent agent, limited resources, reinforcement learning, dynamic game
