
Monte Carlo tree search strategies (Strategije drevesnega preiskovanja Monte Carlo)
Vodopivec, Tom (Author), Šter, Branko (Mentor)

PDF - Presentation file (2.59 MB)
MD5: 6A076684EFB1A4CAA7861771CD685635
PID: 20.500.12556/rul/74d980bc-2c7d-4870-81d4-1298aa471d68

Abstract
Since their breakthrough in the game of Go, Monte Carlo tree search (MCTS) methods have triggered rapid progress in game-playing agents: the research community has since developed many variants and enhancements of the MCTS algorithm, advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow convergence; this is especially true of the core MCTS algorithms that use no additional enhancements. For this reason, researchers and practitioners often combine them with expert knowledge, heuristics, and handcrafted strategies. Despite the remarkable achievements of such enhanced methods (an example is AlphaGo, which recently surpassed the world's best human Go player and thereby conquered this grand challenge of artificial intelligence), such domain-specific enhancements reduce the generality of many applied algorithms. Improving the core MCTS algorithms without sacrificing their generality and adaptability is difficult and remains an open research challenge. This dissertation introduces a new approach for upgrading the core MCTS methods and refines the fundamental understanding of the field in light of the older and well-established field of reinforcement learning (RL). The connection between Monte Carlo tree search, which the community classifies among search and planning methods, and reinforcement learning has been outlined in the past, but it has not yet been studied thoroughly, nor has it significantly influenced the broader artificial intelligence community. With this motivation, we analyse the relation between the two fields in depth by identifying and describing their similarities and differences. We introduce a practical approach for extending MCTS methods with concepts from reinforcement learning: our new framework, temporal difference tree search (TDTS), embodies a new family of algorithms that operate according to MCTS concepts while learning from temporal differences instead of Monte Carlo sampling. This can be understood as a generalization of MCTS methods with temporal-difference learning and, at the same time, as a generalization of classic temporal-difference learning methods with tree search and other concepts from MCTS (such as incremental tree expansion and the use of a default policy). Using TDTS methods, we show that applying established reinforcement-learning concepts in combination with tree search opens the way to a broad spectrum of new algorithms, of which classic MCTS methods are only one variant. In our experiments we evaluate several such algorithms, focusing on an extension of the UCT (upper confidence bounds for trees) algorithm with Sarsa(\lambda), one of the fundamental reinforcement-learning algorithms. Our measurements confirm that TDTS algorithms achieve better results on simple single-player games, classic two-player games, and arcade video games: the new algorithms retain the robustness and the computational and memory requirements of their predecessors while consistently outperforming MCTS algorithms. Our findings narrow the gap between Monte Carlo tree search and reinforcement learning and call for stronger future integration of the two fields.
Finally, this dissertation encourages exploring and promoting a more unified view of two of the fundamental paradigms of artificial intelligence: search and learning.
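
To make the core idea concrete, the sketch below contrasts the plain Monte Carlo backup used by basic MCTS/UCT with a lambda-return (temporal-difference) backup of the kind that TDTS builds on. It is a minimal illustration written for this summary, not code from the dissertation: the node fields (value, visits), the parameter names (alpha, gamma, lam), and the assumption that a reward arrives only at the end of a simulated episode are simplifying assumptions.

# Minimal sketch (not from the thesis): value backup along the list of tree
# nodes visited in one search iteration, ordered from the root to the leaf.
# Each node is assumed to carry .value and .visits fields.

def monte_carlo_backup(path, final_reward):
    # Basic MCTS/UCT: every visited node keeps a running mean of the returns.
    for node in path:
        node.visits += 1
        node.value += (final_reward - node.value) / node.visits

def lambda_return_backup(path, final_reward, alpha=0.05, gamma=1.0, lam=0.8):
    # TD-style backup: each node moves toward its lambda-return, which blends
    # one-step bootstrapping from the next node's estimate with the full return.
    g = final_reward                       # target for the deepest node
    for node in reversed(path):
        node.visits += 1
        old_value = node.value
        node.value += alpha * (g - old_value)
        # Recursive lambda-return for the node's parent; intermediate rewards
        # are assumed to be zero, so only the terminal reward enters the target.
        g = gamma * ((1.0 - lam) * old_value + lam * g)

With lam = 1, gamma = 1, and a step size of 1/node.visits, the second function reduces to the first, which mirrors the statement above that classic MCTS is only one special case within the broader family.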

Language:English
Keywords:search, planning, reinforcement learning, Monte Carlo tree search, game playing
Work type:Doctoral dissertation
Organization:FRI - Faculty of Computer and Information Science
Year:2018
PID:20.500.12556/RUL-99313
COBISS.SI-ID:1537691587
Publication date in RUL:12.01.2018
Views:2838
Downloads:1304

Secondary language

Language:English
Title:Monte Carlo tree search strategies
Abstract:
Since their breakthrough in computer Go, Monte Carlo tree search (MCTS) methods have initiated almost a revolution in game-playing agents: the artificial intelligence (AI) community has since developed an enormous number of MCTS variants and enhancements that advanced the state of the art not only in games, but also in several other domains. Although MCTS methods merge the generality of random sampling with the precision of tree search, their convergence rate can be relatively low in practice, especially when not aided by additional enhancements. This is why practitioners often combine them with expert or prior knowledge, heuristics, and handcrafted strategies. Despite the outstanding results (like the AlphaGo engine, which defeated the best human Go players, prodigiously overcoming this grand challenge of AI), such task-specific enhancements decrease the generality of many applied MCTS algorithms. Improving the performance of core MCTS methods, while retaining their generality and scalability, has proven difficult and is a current research challenge. This thesis presents a new approach for the general improvement of MCTS methods and, at the same time, advances the fundamental theory behind MCTS by taking inspiration from the older and well-established field of reinforcement learning (RL). The links between MCTS, which is regarded as a search and planning framework, and RL theory have been outlined in the past; however, they have not yet been thoroughly studied, nor have the existing studies significantly influenced the larger game AI community. Motivated by this, we re-examine in depth the close relation between the two fields: we detail not only the similarities, but also identify and emphasize the differences between them. We present a practical way of extending MCTS methods with RL dynamics: we develop the temporal difference tree search (TDTS) framework, a novel class of MCTS-like algorithms that learn via temporal differences (TD) instead of Monte Carlo sampling. This can be understood both as a generalization of MCTS with TD learning and as an extension of traditional TD learning methods with tree search and novel MCTS concepts. Through TDTS we show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, of which traditional MCTS is only one of the variants. We experiment with several such algorithms, focusing on an extension of the upper confidence bounds for trees (UCT) algorithm with the Sarsa(\lambda) algorithm, an on-policy TD learning method. Our evaluations confirm that MCTS-like methods inspired by RL dynamics demonstrate superior results on several classic board games and arcade video games: the new algorithms preserve robustness and computational complexity, while consistently outperforming their basic MCTS counterparts. Our findings encourage cross-fertilization between the game AI and RL communities, hopefully narrowing the gap between them, and promote a unified view of search and learning methods, recommending it as a promising research direction.
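
For reference, the two components named above have standard textbook forms; the equations below reproduce only those general, widely published formulas and are not taken from the thesis, whose combined algorithm may differ in details such as value normalization and the handling of unvisited actions. UCT selects in state $s$ the action

$$a^{*} = \arg\max_{a} \left[ Q(s,a) + C \sqrt{\frac{\ln N(s)}{N(s,a)}} \right],$$

where $N(s)$ and $N(s,a)$ are visit counts and $C$ is the exploration constant. Tabular Sarsa(\lambda) with accumulating eligibility traces updates all state-action estimates after each transition $(s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1})$:

$$\delta_t = r_{t+1} + \gamma\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t),$$
$$e(s,a) \leftarrow \gamma \lambda\, e(s,a) + \mathbf{1}[s = s_t,\ a = a_t],$$
$$Q(s,a) \leftarrow Q(s,a) + \alpha\, \delta_t\, e(s,a) \quad \text{for all } (s,a),$$

with learning rate $\alpha$, discount factor $\gamma$, and trace-decay parameter $\lambda$.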

Keywords:search, planning, reinforcement learning, Monte Carlo tree search, game playing
