
Explanations of medical prediction models using background knowledge
Author: Leonida Lumburovska; Mentor: Marko Robnik Šikonja

PDF - Presentation file (941.34 KB)
MD5: 4C583288E264251881E70635332E6D5D

Abstract
Prediction models are useful in many areas, as they provide decisions as well as an understanding of the problem. In medicine, they are often used to predict diseases, outbreaks, reactions to medications, etc. Data scientists strive to improve these models to obtain more accurate results and a better understanding of different phenomena. Since deep learning models are considered black boxes, their decisions are not easily explained, yet interpretations would be very beneficial. This thesis presents two approaches to interpreting medical models. The first explains predictions with contextual decomposition, focusing not only on the importance of individual features but also on the interactions between them; this lets us understand complex features and their role in models. The second approach leverages saliency maps to provide visual explanations, highlighting the parts of an image most influential in the model's prediction. A comparison of the two methods on a skin cancer dataset shows their similarities and differences. The results show that the second approach yields more understandable explanations, while the first is more useful for improving model accuracy.
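The second approach in the abstract, gradient-based saliency maps, attributes a model's prediction to input pixels via the gradient of the class score with respect to the image. A minimal sketch of that idea, assuming a toy linear scorer in place of the thesis's deep skin-cancer classifier (the `score` function, weights, and image below are illustrative assumptions, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 4, 4                            # toy "image" size
image = rng.random((H, W))             # stand-in for a skin-lesion image
weights = rng.standard_normal((H, W))  # stand-in for learned parameters

def score(x):
    """Toy model: the class score is a weighted sum of pixels."""
    return float((weights * x).sum())

def saliency(x, eps=1e-6):
    """Vanilla gradient saliency: |d score / d pixel|, via finite differences.

    Treats the model as a black box; for the linear toy model the gradient
    at each pixel recovers that pixel's weight.
    """
    grad = np.zeros_like(x)
    base = score(x)
    for idx in np.ndindex(x.shape):
        bumped = x.copy()
        bumped[idx] += eps
        grad[idx] = (score(bumped) - base) / eps
    return np.abs(grad)

s = saliency(image)
# The most salient pixel is the one the score is most sensitive to.
top = np.unravel_index(np.argmax(s), s.shape)
print("most influential pixel:", top)
```

For a real CNN one would backpropagate the class score to the input instead of finite-differencing; the loop above merely makes the gradient concrete for any black-box score function.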

Language: English
Keywords: artificial intelligence, predictions, background knowledge, medicine, explainable AI
Work type: Bachelor thesis/paper
Typology: 2.11 - Undergraduate Thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2023
PID: 20.500.12556/RUL-143700
COBISS.SI-ID: 137274883
Publication date in RUL: 09.01.2023
Views: 415
Downloads: 139

Secondary language

Language: Slovenian
Title: Razlaga napovednih modelov v medicini z uporabo predznanja
Abstract (translated):
Prediction models are useful in many fields, as they provide decisions and contribute to understanding the problem. In medicine they are often used to predict diseases, outbreaks of infectious diseases, reactions to medications, etc. Researchers strive to improve models in order to obtain better predictions and a better understanding of various phenomena. Since deep learning models are considered black boxes, their decisions are not easy to interpret; improvements in this area would be very welcome. The thesis presents two approaches to model interpretation. The first uses contextual decomposition, a method that focuses not only on the importance of individual attributes but also on the interactions between them; with this approach we can understand complex features and their role in models. The second approach exploits important features to find visual explanations of which parts of an image are most influential in the model. A comparison of both methods on the skin cancer problem shows similarities and differences between the two. The results show that the second approach gives more understandable explanations, while the first is more useful when trying to improve model accuracy.

Keywords (translated): artificial intelligence, prediction models, background knowledge, medicine, explainable AI
