
TD learning in Monte Carlo tree search : master's thesis
DELEVA, ALEKSANDRA (Author), Šter, Branko (Mentor)

PDF - Presentation file (6.73 MB)
MD5: B8377E12555AF7A3B8A25F2BD42AA93D
PID: 20.500.12556/rul/1d5a8994-f0a4-44cb-a273-e1d4ef83af5d

Abstract
Monte Carlo tree search (MCTS) became well known through its success in the game of Go, where a computer had never before won a game against a human master player. Multiple variations of the algorithm have appeared since. One of the best-known versions is Upper Confidence Bounds for Trees (UCT) by Kocsis and Szepesvári. Many of the enhancements to the basic MCTS algorithm rely on domain-specific heuristics, which make the algorithm less general. The goal of this thesis is to investigate how to improve the MCTS algorithm without compromising its generality. Temporal Difference (TD) learning, a Reinforcement Learning (RL) paradigm, combines two concepts: Dynamic Programming (DP) and the Monte Carlo (MC) method. Our goal was to incorporate the advantages of TD learning into the MCTS algorithm. The main idea was to change how the reward for each node is calculated and when it is updated. From the results of the experiments, one can conclude that combining the MCTS algorithm with TD learning is indeed a good idea. The newly developed Sarsa-TS(λ) shows a general improvement in performance. Since the games on which we ran our experiments are all very different, the effect the algorithm has on performance varies.
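The abstract's main idea — changing how and when node rewards are updated — can be illustrated with a minimal sketch. This is a hypothetical illustration of a TD(λ)-style backup replacing the standard MCTS average backup during backpropagation; the class, function names, and parameter values are assumptions for illustration, not taken from the thesis's Sarsa-TS(λ) implementation.

```python
class Node:
    """A minimal MCTS tree node holding a learned value estimate."""
    def __init__(self):
        self.value = 0.0   # estimated value V(s), updated by TD backups
        self.visits = 0

def td_lambda_backup(path, reward, alpha=0.1, gamma=1.0, lam=0.9):
    """Backpropagate one simulation result with TD(lambda)-style updates.

    path   -- list of Nodes from the root to the leaf of this simulation
    reward -- terminal reward observed at the end of the playout
    """
    # Walk the path from leaf to root; each node moves toward the value
    # of its successor (the TD target), with the update strength decayed
    # by gamma * lambda the further we are from the playout's end.
    next_value = reward
    eligibility = 1.0
    for node in reversed(path):
        node.visits += 1
        td_error = gamma * next_value - node.value
        node.value += alpha * eligibility * td_error
        next_value = node.value
        eligibility *= gamma * lam
```

In contrast to the plain MC backup (which averages final rewards equally over all nodes on the path), this update bootstraps each node's value from its successor, so intermediate estimates propagate information before the playout's outcome fully dominates.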

Language: English
Keywords: Monte Carlo tree search, Monte Carlo, tree search, upper confidence bounds for trees, temporal difference learning, reinforcement learning, artificial intelligence
Work type: Master's thesis/paper
Typology: 2.09 - Master's Thesis
Organization: FRI - Faculty of Computer and Information Science
Publisher: [A. Deleva]
Year: 2015
Number of pages: 56 pp.
PID: 20.500.12556/RUL-72493
COBISS.SI-ID: 1536598211
Publication date in RUL: 24.09.2015
Views: 1480
Downloads: 502

Secondary language

Language: Slovenian
Title: Učenje s časovnimi razlikami pri drevesnem preiskovanju Monte Carlo
Abstract (translated from Slovenian):
Monte Carlo tree search (MCTS) became well known through its successes in the game of Go, in which a computer had never before defeated a human master. Several variants of the algorithm have since emerged. One of the best known is Upper Confidence Bounds for Trees (UCT) by Kocsis and Szepesvári. Many improvements to the basic MCTS algorithm involve domain-specific heuristics, which cost the algorithm its generality. The goal of this master's thesis was to investigate how to improve the MCTS algorithm without compromising its generality. A reinforcement learning paradigm called temporal difference learning allows a combination of two concepts: dynamic programming and Monte Carlo methods. My goal was to incorporate the advantages of temporal difference learning into the MCTS algorithm. In this way, the manner in which node values are updated with respect to the result, i.e. the reward, is changed. The results suggest that combining the MCTS algorithm with temporal difference learning is a good idea. The newly developed Sarsa-TS(λ) algorithm shows a general improvement in playing performance. However, since the games on which the experiments were carried out are of very different natures, the algorithm's effect on performance can vary considerably between games.

Keywords (translated from Slovenian): Monte Carlo tree search, Monte Carlo, tree search, upper confidence bounds for trees, temporal difference learning, artificial intelligence
