
Explaining machine learning models using background knowledge generalization
Stepišnik Perdih, Timen (Author), Robnik Šikonja, Marko (Mentor), Pollak, Senja (Co-mentor)

PDF - Presentation file (2.31 MB)
MD5: 805AC4D544465E5DB6CA21A991EB1715

Abstract
With the growing use of machine learning models and the accelerated development of complex models, there is also a need for effective solutions in explainable artificial intelligence, which aims to bring the behaviour of models closer to human understanding. In this work, we present a new methodology for explaining models that couples established methods for extracting the most important features with clustering methods and a new approach to generalizing explanations based on machine-readable background knowledge. To demonstrate the designed method, we create an artificial domain and evaluate the method on two different real, publicly available datasets, showing its applicability and compatibility with different data types. To assess the quality of the explanations, we obtain the opinions of domain experts. The generalizations are judged to be sensible but too general for the experts' needs, so we propose possible uses for other groups of users. The method is implemented as a publicly available Python library.
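
The abstract only outlines the pipeline, so the following is a minimal, hypothetical Python sketch of the general idea: compute per-instance feature importances, cluster instances by their importance vectors, and generalize each cluster's dominant features by mapping them onto broader concepts from a machine-readable taxonomy. The occlusion-based importance proxy, the toy features and taxonomy, and all names below are illustrative assumptions and are not the thesis library's actual interface.

# Illustrative sketch only: importance extraction, clustering of explanation
# vectors, and generalization via a toy background-knowledge taxonomy.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data with named features; two informative features drive the label.
X, y = make_classification(n_samples=300, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
feature_names = ["red_apples", "green_apples", "hammers", "saws"]

# Hypothetical machine-readable background knowledge: feature -> broader concept.
taxonomy = {"red_apples": "fruit", "green_apples": "fruit",
            "hammers": "tools", "saws": "tools"}

model = RandomForestClassifier(random_state=0).fit(X, y)

def instance_importances(model, X):
    """Per-instance importance proxy: drop in the predicted class probability
    when a feature is replaced by its column mean (simple occlusion)."""
    proba = model.predict_proba(X)
    pred = proba.argmax(axis=1)
    base = proba[np.arange(len(X)), pred]
    importances = np.zeros_like(X, dtype=float)
    for j in range(X.shape[1]):
        X_occluded = X.copy()
        X_occluded[:, j] = X[:, j].mean()
        occluded = model.predict_proba(X_occluded)[np.arange(len(X)), pred]
        importances[:, j] = base - occluded
    return importances

# Cluster instances by their explanation (importance) vectors.
importances = instance_importances(model, X)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(importances)

# Generalize each cluster: map its most important features onto taxonomy concepts.
for c in sorted(set(clusters)):
    top_features = [feature_names[i]
                    for i in importances[clusters == c].argmax(axis=1)]
    concepts = Counter(taxonomy[f] for f in top_features)
    print(f"cluster {c}: top features {Counter(top_features).most_common(2)}, "
          f"generalized concept: {concepts.most_common(1)[0][0]}")

In the thesis itself, importance extraction relies on established attribution methods and the generalization step uses real machine-readable background knowledge; the sketch above only mirrors the overall shape of the pipeline.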

Language: Slovenian
Keywords: explainable artificial intelligence, model explanations, explanation generalization, machine learning, clustering, natural language processing
Work type: Master's thesis/paper
Typology: 2.09 - Master's Thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2024
PID: 20.500.12556/RUL-161735
COBISS.SI-ID: 210383619
Publication date in RUL: 13.09.2024
Views: 174
Downloads: 140

Secondary language

Language: English
Title: Explaining machine learning models using background knowledge generalization
Abstract:
With the growing use of machine learning models and the accelerated development of complex models, there is a need for effective solutions in the field of explainable artificial intelligence. In this work, we propose a new method for explaining models that couples established methods for extracting the most important features with clustering methods and a new approach to generalizing explanations using machine-readable domain knowledge. We demonstrate the methodology on an artificially created domain and on two different real domains. We evaluate the quality of the explanations with the help of domain experts. The generalizations are judged to be sensible but too general for researchers, so we propose potential use cases for other groups of users. The method is implemented as a publicly available Python library.

Keywords: explainable AI, model explanations, explanation generalization, machine learning, clustering, natural language processing
