
Explanation of machine learning predictions using biological background knowledge
DROFENIK, KLARA (Author), Robnik Šikonja, Marko (Mentor), Škrlj, Blaž (Co-mentor)

PDF - Presentation file, Download (428,65 KB)
MD5: 2E160D25A499A95ADA69D6A25548DC78

Abstract
For random forests, neural networks, and other complex machine learning models it is difficult to say why they produced a given prediction. This problem is addressed by algorithms that try to explain the influence of attributes on the prediction of the target variable. One such algorithm is SHAP, which estimates how each attribute value contributes to the model's prediction. Our goal was to check how well SHAP explanations agree with background knowledge. On several protein data sets we built prediction models with the XGBoost method and explained them with the SHAP algorithm. We checked whether known interactions exist between the proteins that are important for the model's predictions, which allows us to assess how well SHAP can be used to discover interactions. The number of interactions found differed across training sets and knowledge bases. Our research suggests the potential usefulness of the SHAP algorithm for discovering interactions.
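As a rough illustration of the pipeline described in the abstract (not the author's actual code), the sketch below trains an XGBoost classifier on a tabular data set whose columns stand for proteins and ranks the features by their mean absolute SHAP values, including pairwise SHAP interaction values. The synthetic data, variable names, and hyperparameters are assumptions for demonstration only.

```python
import numpy as np
import shap
import xgboost as xgb

# Hypothetical data set: rows are samples, columns are protein features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)  # synthetic binary target

# Fit an XGBoost classifier (hyperparameters are illustrative only).
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X, y)

# Explain the model with SHAP's TreeExplainer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)               # (n_samples, n_features)
interactions = explainer.shap_interaction_values(X)  # (n_samples, n_feat, n_feat)

# Rank proteins (features) by mean absolute SHAP value.
importance = np.abs(shap_values).mean(axis=0)
top_proteins = np.argsort(importance)[::-1][:10]
print("Most important features:", top_proteins)

# Strongest pairwise SHAP interaction (off-diagonal entries only).
pair_strength = np.abs(interactions).mean(axis=0)
np.fill_diagonal(pair_strength, 0.0)
i, j = np.unravel_index(np.argmax(pair_strength), pair_strength.shape)
print(f"Strongest interaction: features {i} and {j}")
```

The top-ranked proteins or protein pairs would then be checked against a biological knowledge base; a sketch of such a lookup follows the English abstract below.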

Language:Slovenian
Keywords:SHAP explanation method, explanation of prediction models, biological background knowledge, XGBoost prediction model
Work type:Bachelor thesis/paper
Typology:2.11 - Undergraduate Thesis
Organization:FRI - Faculty of Computer and Information Science
FMF - Faculty of Mathematics and Physics
Year:2021
PID:20.500.12556/RUL-125528
COBISS.SI-ID:58202115
Publication date in RUL:23.03.2021
Views:1234
Downloads:191
Metadata:XML, DC-XML, DC-RDF

Secondary language

Language:English
Title:Explanation of machine learning predictions using biological background knowledge
Abstract:
Decisions of complex machine learning models such as random forests and neural networks are difficult to explain. This problem can be addressed with perturbation-based algorithms such as SHAP, which assign credit for a prediction to individual attribute values. Our goal was to check whether the output of SHAP matches background knowledge. We trained XGBoost models on several data sets whose attributes are proteins and explained the models with the SHAP algorithm. We checked whether known biological interactions exist between the proteins that SHAP marks as important; such a check could turn SHAP into an interaction discovery algorithm. The number of interactions found differs with the chosen data set and knowledge base. Our research hints at the potential usefulness of the explanation algorithm for finding interactions.
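To make the knowledge-base check concrete, here is a minimal sketch of how a protein pair flagged by SHAP might be looked up in a public interaction database. It assumes the STRING REST API (https://string-db.org/api) with its tsv/network endpoint and identifiers/species parameters; the endpoint, parameters, gene symbols, and choice of database are assumptions for illustration, not the procedure used in the thesis.

```python
import requests

# Hypothetical pair of proteins ranked as important/interacting by SHAP.
proteins = ["TP53", "MDM2"]

# Assumed STRING REST API call: returns known interactions among the supplied
# identifiers (species 9606 = human). Verify the endpoint and parameters
# against the current STRING documentation before relying on this.
response = requests.get(
    "https://string-db.org/api/tsv/network",
    params={"identifiers": "\r".join(proteins), "species": 9606},
    timeout=30,
)
response.raise_for_status()

lines = response.text.strip().splitlines()
# The first line is the TSV header; any further line means the database
# records an interaction between the two proteins.
has_known_interaction = len(lines) > 1
print(f"{proteins}: known interaction = {has_known_interaction}")
```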

Keywords:SHAP explanation method, explanation of prediction models, biological background knowledge, XGBoost prediction model

