
Development of components for bias analysis in Orange (Razvoj gradnikov za analizo pristranosti v programu Orange)
Mervič, Žan (Author), Zupan, Blaž (Mentor), Toplak, Marko (Co-mentor)

PDF - Presentation file (1.24 MB)
MD5: C1540FACA1B34BEADD064C78D5DF44FD

Abstract
Artificial intelligence and machine learning are increasingly used in decision-making processes such as hiring, court sentencing, and loan approval. Because these decisions significantly affect individuals, decision models must not be biased against particular demographic groups. In this thesis we evaluated various algorithms for detecting and removing bias in data and in model predictions. The results show that the bias-mitigation methods we studied are effective and relatively easy to use. The main result of the thesis is an add-on for the Orange environment that includes widgets for handling bias. The developed widgets can also be used by individuals without programming knowledge. The add-on is freely available in a GitHub repository (source code) and in the Orange programming environment. We also reported on the implementation on the Orange tool's website.

Language: Slovenian
Keywords: bias, fairness, machine learning, visual programming, Orange
Work type: Bachelor thesis/paper
Typology: 2.11 - Undergraduate Thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2024
PID: 20.500.12556/RUL-161046
COBISS.SI-ID: 211187971
Publication date in RUL: 06.09.2024
Views: 169
Downloads: 29

Secondary language

Language: English
Title: Development of components for bias analysis in Orange
Abstract:
Artificial intelligence and machine learning are increasingly used in decision-making processes such as hiring, sentencing, and credit approval. Given the significant impact these decisions have on people's lives, the models that drive them must be free of bias against any demographic group. In this thesis, we tested several algorithms designed to detect and mitigate bias in data and model predictions. The results show that the methods we tested are both effective and relatively easy to use. The main result of this work is an add-on for the Orange environment that helps users without programming skills understand and address bias in machine learning. The add-on, together with its source code, is available on GitHub and has been integrated into the Orange programming environment. We also reported on the implementation on the Orange tool's website.
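As an illustration of the kind of bias detection the abstract describes, the sketch below computes two standard group-fairness metrics (disparate impact and statistical parity difference) over hypothetical model predictions. This is not code from the thesis or its add-on; the group labels, example data, and the 0.8 "80% rule" threshold are illustrative assumptions.

```python
# Illustrative sketch only: two common group-fairness metrics that
# bias-detection tools typically report. The data and threshold are
# hypothetical, not taken from the thesis.

def selection_rate(outcomes):
    """Fraction of favorable (positive) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of group selection rates; values below ~0.8 suggest bias."""
    return selection_rate(unprivileged) / selection_rate(privileged)

def statistical_parity_difference(unprivileged, privileged):
    """Difference in group selection rates; 0 means parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

# Hypothetical predictions (1 = favorable outcome, e.g. loan approved)
priv = [1, 1, 1, 0, 1, 1, 0, 1]     # selection rate 0.75
unpriv = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

di = disparate_impact(unpriv, priv)                 # 0.5
spd = statistical_parity_difference(unpriv, priv)   # -0.375
print(f"disparate impact: {di:.3f}, parity difference: {spd:.3f}")
```

With these example numbers the disparate impact ratio falls below the conventional 0.8 threshold, which is the kind of signal a bias-detection widget would surface before a mitigation step is applied.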

Keywords: bias, fairness, machine learning, visual programming, Orange
