Detecting stereotypes in large BERT-based Slovenian language models : master's thesis
Gornik, Maja (Author), Robnik Šikonja, Marko (Mentor)

PDF - Presentation file (30.93 MB)
MD5: B6BE1DBAFA751075DEB87AB39654F371

Abstract
Due to their effectiveness, large pre-trained language models are currently one of the main approaches to natural language processing. These models are trained on extensive collections of texts that contain various forms of biases and stereotypes. In this thesis, we explore how these biases and stereotypes are reflected in pre-trained language models for the Slovenian language, particularly in the SloBERTa model, pretrained mostly on standard language. To detect stereotypes, we employ a language model to predict missing words in sentences describing the characteristics of individual social groups. Using this approach, we identify salient attributes for a wide range of social groups, analyse their sentiment, and statistically evaluate the results. The analysis shows statistically significant differences among the considered groups in terms of detected sentiment. The models also reflect numerous negative stereotypes, which are more pronounced in models trained on colloquial language, such as SloBERTa-SlEng and SlEng-BERT. This work contributes to a better understanding of how large language models work and the challenges associated with their use in practice. In addition, it has the potential to be used to uncover social stereotypes and attitudes towards individual social groups.
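
The masked-word probing described above can be illustrated with a minimal sketch built on the Hugging Face transformers fill-mask pipeline. The model identifier "EMBEDDIA/sloberta" and the template sentence are illustrative assumptions for this sketch, not material taken from the thesis itself.

from transformers import pipeline

# Minimal sketch of the masked-word probing idea: load a Slovenian
# BERT-style masked language model (model id is an assumption).
fill_mask = pipeline("fill-mask", model="EMBEDDIA/sloberta")

# Template sentence about a social group, with the attribute left masked.
# (Illustrative template, not one of the thesis templates.)
template = f"Vsi študenti so zelo {fill_mask.tokenizer.mask_token}."

# The top-k completions and their probabilities are the raw material that
# a stereotype analysis would then score for sentiment.
for prediction in fill_mask(template, top_k=5):
    print(f"{prediction['token_str']:>15}  p={prediction['score']:.3f}")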

Language:Slovenian
Keywords:large pre-trained language models, BERT model, SloBERTa model, stereotypes
Work type:Master's thesis/paper
Typology:2.09 - Master's Thesis
Organization:FMF - Faculty of Mathematics and Physics
FRI - Faculty of Computer and Information Science
Year:2024
PID:20.500.12556/RUL-154462
COBISS.SI-ID:184907267
Publication date in RUL:16.02.2024
Views:648
Downloads:116
