
A comparison of Prediction Explanations of Neural Networks on Mammographic Images
Koren, Aljoša (Author), Sadikov, Aleksander (Mentor)

PDF - Presentation file (6.47 MB)
MD5: C2986B335D16EDD442C1AEAB7EF730D3

Abstract
With ever-greater advances in artificial intelligence and growing collections of medical data, the door is opening to new methods that could help physicians with diagnosis, thereby improving the detection of diseases while easing the workload of healthcare professionals. Despite the success of some models in classifying medical data, the transparency of these models remains a problem. Deep neural network models contain a large number of parameters and are difficult to interpret. In medicine, a wrong decision can have long-lasting and serious consequences, which makes trust in a model's predictions all the more important. Although some models achieve excellent results, we cannot blindly trust them without first understanding why the model arrived at a given diagnosis. In this master's thesis, we focused on mammographic images and trained models that classified images as healthy or cancerous. The models achieved values of the area under the ROC curve above 0.90. We then examined the predictions and explanations of different models on several test mammographic images. Reviewing the results, we observed differences between the explanations produced by different explanation methods for the same model, as well as across different architectures, even when the diagnosis was the same. We assessed the explanation methods we used as unsuitable for the field of mammography.

Language: Slovenian
Keywords: Deep learning, explainable artificial intelligence, medical image analysis
Work type: Master's thesis/paper
Organization: FRI - Faculty of Computer and Information Science
Year: 2024
PID: 20.500.12556/RUL-164946
Publication date in RUL: 18.11.2024
Views: 41
Downloads: 18

Secondary language

Language: English
Title: A comparison of Prediction Explanations of Neural Networks on Mammographic Images
Abstract:
With the increasing advancements in artificial intelligence and the growing collections of medical data, new methods are emerging that could help doctors with diagnosis. This could improve the detection of diseases and decrease the workload of healthcare professionals. Despite the success of some models in classifying medical data, the issue of transparency in these models remains. Deep neural network models contain a large number of parameters and are difficult to interpret. In the field of medicine, incorrect decisions can have long-lasting and severe consequences, making trust in a model's predictions even more critical than in other fields. Although some models achieve excellent results, we cannot blindly trust them without first understanding why the model arrived at a given diagnosis. In this master's thesis, we focused on mammography images and trained models to diagnose images as healthy or cancerous. The models achieved values of the area under the ROC curve (AUC) above 0.90. We then examined the predictions and explanations of different models on several test mammography images. During the review of the results, we observed differences in the explanations provided by various interpretability methods for the same models, as well as across different architectures, even when the diagnosis was the same. The methods used for explanations in this study were found to be ineffective in the field of mammography.
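The AUC figure quoted in the abstract can be read as a pairwise ranking probability: the chance that a randomly chosen positive (cancerous) image receives a higher model score than a randomly chosen negative (healthy) one. A minimal stdlib sketch of that reading (the function name and the toy scores are illustrative, not taken from the thesis):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via its pairwise-ranking interpretation:
    the probability that a positive case outscores a negative case,
    counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example: three cancerous and three healthy scores.
print(auc([0.9, 0.8, 0.75], [0.3, 0.6, 0.85]))  # 7/9 ≈ 0.778
```

An AUC above 0.90, as reported for the thesis models, thus means a positive case outranks a negative one in over 90% of such pairs.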

Keywords: Deep learning, explainable artificial intelligence, medical image analysis
