
Spolna pristranost v jeziku in orodjih umetne inteligence
Supej, Anka (Author), Markič, Olga (Mentor), Pollak, Senja (Co-mentor)

PDF - Presentation file (2.48 MB)
MD5: 5B248738BEBCA7E309084BCC77F96CEA

Abstract
This master's thesis takes an interdisciplinary approach to understanding the gender bias that appears in the output of artificial-intelligence tools based on language models. Biases and stereotypes become problematic when we systematically exhibit an unfair positive or negative inclination towards a particular group, which can lead to heuristics and to wrong and unfair decisions. The thesis illuminates, step by step, how bias in AI tools arises in the first place: it first explains the methods of natural language processing systems and the modelling of word meaning through vector word embeddings. It then examines how gender bias manifests itself in language, and how the language of a training dataset influences the resulting word embeddings. Besides the choice of training datasets, I identify labelling, the representation of input data, the models themselves, and the conceptualisation of research as sources of gender bias in AI tools. Drawing on my own studies, published between 2019 and 2021, the thesis shows the ways in which natural language processing tools exhibit gender bias. It also addresses gender bias from the perspective of human interaction with biased tools. Anthropomorphisation and the apparent objectivity of artificial intelligence are two of the main negative influences such tools can have on human decision-making. The key contribution of the thesis is to propose, through a broad review of the literature from linguistics, computer science, philosophy and other disciplines, several guidelines that could reduce gender bias in tools such as large language models. Gender bias must be precisely defined with the help of the social sciences. AI tools must at the same time meet high ethical standards, be inclusive and represent a diversity of experiences. A clear framework of accountability for tool developers needs to be established.
Companies must commit to the continuous improvement of their tools and to transparency, even when this does not benefit them financially. Despite the benefits a tool that imitates a human can offer, I believe developers should commit to reducing the anthropomorphisation of tools, since it can undesirably influence human decision-making and thereby interfere with personal autonomy. When tools themselves generate texts that are then used as training data for further tools, a "feedback loop phenomenon" arises; one of the thesis's guidelines is to prevent this effect. Finally, the thesis proposes a leading role for education about artificial intelligence and gender bias, which can empower people both in their use of the tools and in living in this new reality.

Language: Slovenian
Keywords: gender bias, linguistic bias, gender stereotypes, artificial intelligence, large language models, vector embeddings, anthropomorphisation, animism, transparency
Work type: Master's thesis
Organization: FF - Faculty of Arts
Year: 2024
PID: 20.500.12556/RUL-155014
Publication date in RUL: 14.03.2024
Views: 547
Downloads: 269

Secondary language

Language: English
Title: Gender bias in language and artificial intelligence tools
Abstract:
This master's thesis takes an interdisciplinary approach to understanding gender bias manifested in the output of artificial intelligence tools based on language models. Biases and stereotypes become problematic when we systematically exhibit an unfair positive or negative inclination towards a particular group, which can lead to heuristics and to wrong and unfair decision-making. This work sheds light on how bias in AI tools arises in the first place through a series of steps: first, it explains the methodologies of natural language processing systems and the modelling of word meaning through word embeddings. It then examines how gender bias manifests itself in language, and how the language present in a given training dataset can influence the resulting word embeddings. In addition to the choice of training datasets, the work also identifies labelling, input data representation, models and research conceptualisation as reasons for gender bias in AI tools. Based on our own studies, published between 2019 and 2021, the thesis shows concrete ways in which gender bias manifests itself in natural language processing tools. The master's thesis also deals with gender bias from the perspective of human interaction with biased AI tools. Anthropomorphisation and the apparent objectivity of artificial intelligence are two of the main ways in which such tools can negatively influence human decision-making. The main contribution of this thesis is to propose, through a broad literature review of sources from linguistics, computer science, philosophy and other disciplines, as well as our own studies, several guidelines that could reduce gender bias in tools such as large language models. Firstly, gender bias needs to be precisely defined with the help of the social sciences. At the same time, AI tools must meet high ethical standards, be inclusive and represent a diversity of experiences. A clear framework of accountability for tool developers needs to be established.
Companies must commit to the continuous improvement of tools and to transparency, even if these do not benefit the company financially. Despite the usefulness of tools that mimic humans, I believe it would be beneficial for developers to commit to reducing the anthropomorphisation of tools, since it can undesirably influence decision-making and thus interfere with personal autonomy. One of the guidelines proposed in the thesis is the prevention of the so-called "feedback loop phenomenon" that occurs when AI-generated texts are fed back as training data for further tools. Lastly, the master's thesis proposes introducing education in AI and gender bias, which can empower people both in their use of the tools and as individuals living in the new reality.
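The abstract's central measurement idea, that word embeddings trained on biased text associate occupation words more strongly with one gender than the other, can be sketched with a toy example. The vectors below are invented purely for illustration (they are not from the thesis or from any trained model); real studies compute the same cosine-similarity skew on embeddings such as word2vec or fastText.

```python
import math

# Made-up 4-dimensional "embeddings" for illustration only.
emb = {
    "he":       [0.9, 0.1, 0.3, 0.0],
    "she":      [0.1, 0.9, 0.3, 0.0],
    "engineer": [0.8, 0.2, 0.5, 0.1],
    "nurse":    [0.2, 0.8, 0.5, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def gender_skew(word):
    """Positive -> the word sits closer to 'he'; negative -> closer to 'she'."""
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

for w in ("engineer", "nurse"):
    print(f"{w}: skew = {gender_skew(w):+.3f}")
```

With these toy vectors, "engineer" skews towards "he" and "nurse" towards "she", mirroring the kind of stereotyped association that trained embeddings pick up from biased corpora.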

Keywords: gender bias, language bias, gender stereotypes, artificial intelligence, large language models, word embeddings, anthropomorphisation, animism, transparency
