The rapid progress of artificial intelligence (AI), combined with the constantly widening scope of its practical applications, makes it increasingly important for AI to be understandable to humans. This issue is the central concern of Explainable AI, a field which aims to develop approaches that make the decisions and actions of AI systems more comprehensible to the humans who interact with them.
We used machine learning from examples of human explanations to develop an algorithm that can automatically generate natural-language explanations of its own problem-solving process. Specifically, it explains plans in the blocks world domain.
We recorded human participants explaining the reasons for their actions as they solved blocks world problems, and transformed their explanations into a form suitable for machine learning. From these examples we induced a classifier that our planner can use to select an appropriate explanation in any given situation.
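As a rough illustration of this pipeline, the minimal sketch below induces a classifier from featurized explanation examples. The feature names, explanation labels, and choice of a decision-tree learner are illustrative assumptions, not the exact representation used in our study.

```python
# Minimal sketch: inducing an explanation-selection classifier from
# featurized examples of human explanations. Features and labels are
# hypothetical placeholders for the actual encoding of the recorded data.
from sklearn.tree import DecisionTreeClassifier

# Each row encodes the planning situation at the moment a participant
# explained an action (assumed binary features: the moved block is clear,
# the destination is clear, the move directly achieves a goal condition).
X = [
    [1, 1, 1],  # clear block, clear destination, goal-directed move
    [1, 0, 1],  # destination occupied
    [0, 1, 0],  # block buried; the move clears the way
    [1, 1, 0],  # temporary move out of the way
]
# Labels: which explanation schema the participant used in that situation.
y = [
    "achieves-goal",
    "clears-destination",
    "unstacks-obstacle",
    "moves-out-of-way",
]

clf = DecisionTreeClassifier().fit(X, y)

# At plan time, the planner featurizes the current action and selects
# the explanation schema the classifier predicts for that situation.
print(clf.predict([[1, 1, 1]]))  # -> ['achieves-goal']
```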
Machine learning from human explanations is a hitherto unexplored idea in explainable planning, and our results represent its first demonstration. This opens the possibility of numerous practical applications in other, more complex planning domains.