
Strojno učenje v pokru (Machine learning in poker)
URANKAR, JAN (Author), DEMŠAR, JANEZ (Mentor)

PDF - Presentation file (688.35 KB)
MD5: 4A1CE800C589FC74BB2726BB08015488

Abstract
The goal of this diploma thesis is to investigate the differences between the play of agents trained with artificial intelligence and that of humans in poker. The first steps were learning the poker variant called No Limit Texas Hold'em and becoming familiar with the rules of the game. The next step was building a poker agent that could be taught to play the game through machine learning. We chose an algorithm from the family of reinforcement learning algorithms, called counterfactual regret minimization, and trained the agent to play poker. We then initialized two agents that played against each other while we recorded their moves. Once enough agent games had been generated, we obtained data on games played by humans. In the next step we prepared a script to compare the agents' game data with the humans' game data. The script analysed several aspects of play, which formed the basis for conclusions about the playing style of a given player or agent. Based on the experiment, we concluded that agents trained with our algorithm are far more active and aggressive than humans. In this context, active play means that a player plays many hands, while aggressive play means betting a lot and frequently bluffing. These findings agree with those of other teams that have trained intelligent poker agents using the counterfactual regret minimization algorithm.
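The strategy-update step at the core of counterfactual regret minimization can be sketched in a few lines. This is an illustrative regret-matching step only, not the thesis' actual implementation; the function name and data layout are assumptions:

```python
# Minimal sketch of regret matching, the per-information-set update used by
# counterfactual regret minimization (CFR). Positive cumulative regrets are
# normalized into action probabilities; if no regret is positive, the agent
# plays uniformly at random. Illustrative only - the thesis' real action
# abstraction and bookkeeping are not shown in this record.

def regret_matching(cumulative_regrets):
    """Map a list of cumulative regrets (one per action) to a strategy."""
    positive = [max(r, 0.0) for r in cumulative_regrets]
    total = sum(positive)
    n = len(cumulative_regrets)
    if total > 0:
        return [p / total for p in positive]
    # No action has positive regret: fall back to the uniform strategy.
    return [1.0 / n] * n

# e.g. regrets [10, -5, 5] yield roughly [0.667, 0.0, 0.333]
```

In full CFR, this update runs at every information set on every self-play iteration, and the average strategy over iterations converges toward a Nash equilibrium in two-player zero-sum games.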

Language: Slovenian
Keywords: machine learning, big data, poker, counterfactual regret minimization
Work type: Bachelor thesis/paper
Typology: 2.11 - Undergraduate Thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2021
PID: 20.500.12556/RUL-130299
COBISS.SI-ID: 77195779
Publication date in RUL: 13.09.2021
Views: 1532
Downloads: 99

Secondary language

Language: English
Title: Machine learning in poker
Abstract:
The goal of this diploma thesis is to find the differences between human players and artificial agents in poker. The first step was researching the specific variant of poker, No Limit Texas Hold'em, and learning the rules of the game. The next step was creating an intelligent poker agent trained to play this variant using machine learning. We decided on an algorithm from the family of machine learning algorithms known as reinforcement learning, called counterfactual regret minimization. After selecting the algorithm, we trained the intelligent agent and created two instances of it. These two agents then played poker against each other while we recorded every move they made. Once enough games between the agents had been generated, we created a script for analysing the data. In the script we analysed several aspects of the game, which formed the basis for our conclusions about the playing style of a virtual agent versus a human player. Based on the experiment, we conclude that virtual agents trained with our algorithm play much more actively and aggressively than humans. Active play in this context means that a player plays many hands; aggressive play means that the player bets a lot and often bluffs. These findings are in line with those of other researchers who used counterfactual regret minimization to create intelligent agents.

Keywords: machine learning, big data, poker, counterfactual regret minimization
