
Explanation in artificial intelligence based on machine learning from human explanations
Volavšek, Timotej (Author), Bratko, Ivan (Mentor)

URL - Presentation file: http://pefprints.pef.uni-lj.si/6855/

Abstract
Due to the rapid development of artificial intelligence and the ever wider presence of its practical applications, there is a growing need for intelligent systems to be understandable. This problem is addressed by the field of explainable artificial intelligence, which strives to develop AI capable of explaining its decisions in a way humans can understand. We investigated whether machine learning from human explanations of planning-task solutions can be used to build an algorithm that automatically generates explanations of plans. To this end we conducted an experiment in which participants explained their decisions while solving problems in the blocks world domain. We transformed their explanations into a form suitable for machine learning and used them to induce a decision tree. Based on the learned classifier, a planning algorithm can then select an appropriate type of explanation for each of its actions. The resulting explanations are more similar to human ones and consequently more comprehensible. Explainable artificial intelligence based on learning from human explanations is a previously unexplored idea in planning. Our results represent the first experiment with this approach, and they open the possibility of numerous practical applications in other, more complex planning domains.
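The pipeline described in the abstract — encode each planner action as features, induce a decision tree from human-labelled examples, then classify new actions to select an explanation type — can be sketched as follows. Everything below (the feature names `achieves_goal` and `clears_block`, the explanation types, and the training examples) is a hypothetical toy encoding for illustration, not the thesis's actual data or implementation:

```python
from collections import Counter
import math

# Toy training set: (action features, explanation type chosen by a human).
# Features and labels are illustrative assumptions only.
examples = [
    ({"achieves_goal": True,  "clears_block": False}, "goal"),
    ({"achieves_goal": True,  "clears_block": True},  "goal"),
    ({"achieves_goal": False, "clears_block": True},  "enabling"),
    ({"achieves_goal": False, "clears_block": False}, "default"),
    ({"achieves_goal": False, "clears_block": True},  "enabling"),
]

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def induce(examples, attrs):
    """ID3-style decision-tree induction; leaves are majority labels."""
    labels = [y for _, y in examples]
    if len(set(labels)) == 1 or not attrs:
        return Counter(labels).most_common(1)[0][0]
    def gain(a):  # information gain of splitting on attribute a
        split = {}
        for x, y in examples:
            split.setdefault(x[a], []).append(y)
        rem = sum(len(ys) / len(examples) * entropy(ys) for ys in split.values())
        return entropy(labels) - rem
    best = max(attrs, key=gain)
    tree = {"attr": best, "branches": {}}
    for v in {x[best] for x, _ in examples}:
        sub = [(x, y) for x, y in examples if x[best] == v]
        tree["branches"][v] = induce(sub, [a for a in attrs if a != best])
    return tree

def classify(tree, features):
    """Walk the tree to pick an explanation type for one planner action."""
    while isinstance(tree, dict):
        tree = tree["branches"][features[tree["attr"]]]
    return tree

tree = induce(examples, ["achieves_goal", "clears_block"])
# A planner would call classify() for each action in its plan,
# then fill in the matching natural-language explanation template:
print(classify(tree, {"achieves_goal": False, "clears_block": True}))  # enabling
```

In the thesis's setting the classifier's output would index into explanation templates (e.g. goal-directed vs. enabling explanations) rather than being shown directly; the toy features above stand in for whatever action/state attributes the real encoding uses.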

Language: Slovenian
Keywords: explainable artificial intelligence
Work type: Master's thesis
Typology: 2.09 - Master's Thesis
Organization: PEF - Faculty of Education
Year: 2021
PID: 20.500.12556/RUL-128892
COBISS.SI-ID: 72797187
Publication date in RUL: 16.08.2021
Views: 913
Downloads: 136

Secondary language

Language: English
Title: Explanation in artificial intelligence based on machine learning from human explanations
Abstract:
The fast progress of artificial intelligence (AI), combined with the constantly widening scope of its practical applications, creates a growing need for AI to be understandable to humans. This issue is the key focus of the field of Explainable AI, which aims to develop approaches to AI that make its decisions and actions more comprehensible to the humans interacting with it. We used machine learning from examples of human explanations to develop an algorithm which can automatically generate explanations of its problem-solving process in natural language. Specifically, it explains plans in the blocks world domain. We recorded human participants explaining the reasons for their actions as they solved blocks world problems, and transformed their explanations into a form which could then be used for machine learning. We used these examples to induce a classifier which our planner can use to select an appropriate explanation in any given situation. The use of machine learning from human explanations is a hitherto unexplored idea in explainable planning. Our results represent the first demonstration of this approach and open the possibility of numerous practical applications in other, more complex planning domains.

Keywords: explainable artificial intelligence
