
Cross-lingual transfer of hate speech prediction models
PEČOVNIK, ŽAN (Author), Robnik Šikonja, Marko (Mentor), Ljubešić, Nikola (Co-mentor)

PDF - Presentation file (513,42 KB)
MD5: 1A9BCD0F60E9750FD39FE9B7EC6FA8B9

Abstract
With the rise of social networks, the frequency of hate speech in user-generated content has increased. We focus on two of the currently most discussed topics, LGBT and migrants. For hate speech prediction we use the BERT neural network and compare a multilingual model, trained on 104 different languages, with a trilingual model trained on Slovene, Croatian, and English. We found that the trilingual model predicts hate speech approximately 5% more accurately on the languages it was trained on. The multilingual model, with or without additional fine-tuning, predicts hate speech more accurately than the trilingual model on languages the model was not originally trained on. This indicates better cross-lingual transfer of the multilingual prediction model.

Language: Slovenian
Keywords: hate speech, BERT model, neural networks, cross-lingual transfer, machine learning, natural language processing
Work type: Bachelor thesis/paper
Typology: 2.11 - Undergraduate Thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2020
PID: 20.500.12556/RUL-116665
COBISS.SI-ID: 18357251
Publication date in RUL: 01.06.2020
Views: 1236
Downloads: 284

Secondary language

Language: English
Title: Cross-lingual transfer of hate speech prediction models
Abstract:
With the development of social networks, there has been a significant increase of hate speech in user-generated content. We focus on the two most discussed topics, LGBT and migrants. We use the BERT neural network for prediction of hate speech and make a comparison between the multilingual model, trained on 104 different languages, and a trilingual model, trained on Slovene, Croatian, and English. Results show that the trilingual model is approximately 5% more accurate at predicting hate speech on a language that it was trained on. The multilingual model, with or without additional training, is more accurate on languages that it was not trained on. This indicates better cross-lingual transfer of the multilingual model.
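The comparison described in the abstract boils down to contrasting accuracy on languages a model was fine-tuned on (in-language) against accuracy on held-out languages (zero-shot cross-lingual transfer). A minimal sketch of that bookkeeping in Python; the per-language scores below are illustrative placeholders, not the thesis's measured results:

```python
def split_accuracies(per_lang_acc, trained_on):
    """Average accuracy over languages seen during fine-tuning
    vs. languages evaluated zero-shot (cross-lingual transfer)."""
    seen = [acc for lang, acc in per_lang_acc.items() if lang in trained_on]
    unseen = [acc for lang, acc in per_lang_acc.items() if lang not in trained_on]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(seen), mean(unseen)

# Hypothetical per-language accuracies for a trilingual (sl/hr/en) model,
# evaluated on one additional unseen language:
trilingual_scores = {"sl": 0.80, "hr": 0.79, "en": 0.81, "de": 0.65}
in_lang, zero_shot = split_accuracies(trilingual_scores, trained_on={"sl", "hr", "en"})
print(f"in-language: {in_lang:.2f}, zero-shot: {zero_shot:.2f}")
```

In the thesis's setup, the multilingual model corresponds to BERT trained on 104 languages and the trilingual one to a Slovene/Croatian/English model; a per-language breakdown like this is how an in-language gap (here, the reported ~5%) and a zero-shot gap can each be read off separately.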

Keywords: hate speech, BERT model, neural networks, cross-lingual transfer, machine learning, natural language processing
