
Medjezikovni prenos napovednih modelov za sovražni govor (Cross-lingual transfer of hate speech prediction models)
PEČOVNIK, ŽAN (Author), Robnik Šikonja, Marko (Mentor), Ljubešić, Nikola (Co-mentor)

PDF - Presentation file (513.42 KB)

Abstract
With the development of social networks, the frequency of hate speech in user-generated content has grown. We focus on two of the currently most topical themes, LGBT and migrants. For hate speech prediction we use the BERT neural network and compare a multilingual model, trained on 104 different languages, with a trilingual model, trained on Slovene, Croatian and English. We find that the trilingual model predicts hate speech approximately 5% more accurately in the languages it was trained on. The multilingual model, with or without additional training, predicts hate speech more accurately than the trilingual model in languages neither model was originally trained on. This indicates better cross-lingual transfer of the multilingual prediction model.

Language: Slovenian
Keywords: hate speech, BERT model, neural networks, cross-lingual transfer, machine learning, natural language processing
Work type: Bachelor thesis/paper (mb11)
Organization: FRI - Faculty of Computer and Information Science
Year: 2020
COBISS.SI-ID: 18357251
Views: 362
Downloads: 157

Secondary language

Language: English
Title: Cross-lingual transfer of hate speech prediction models
Abstract:
With the development of social networks, there has been a significant increase of hate speech in user-generated content. We focus on the two most discussed topics, LGBT and migrants. We use the BERT neural network to predict hate speech and compare a multilingual model, trained on 104 different languages, with a trilingual model, trained on Slovene, Croatian and English. Results show that the trilingual model is approximately 5% more accurate at predicting hate speech in the languages it was trained on. The multilingual model, with or without additional training, is more accurate than the trilingual model on languages it was not trained on. This indicates better cross-lingual transfer of the multilingual model.

Keywords: hate speech, BERT model, neural networks, cross-lingual transfer, machine learning, natural language processing
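
As a rough illustration of the setup described in the abstract, the sketch below fine-tunes a pretrained BERT model with a binary sequence-classification head using the Hugging Face Transformers library. The model identifiers (bert-base-multilingual-cased for the 104-language multilingual model and EMBEDDIA/crosloengual-bert for a trilingual Slovene/Croatian/English model) and the toy labelled comments are illustrative assumptions, not the exact data or configuration used in the thesis.

# Minimal sketch: fine-tuning a pretrained BERT model for binary hate speech
# classification with Hugging Face Transformers. Model names and the toy
# labelled comments below are assumptions for illustration only.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-multilingual-cased"  # or "EMBEDDIA/crosloengual-bert"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy examples standing in for annotated social media comments
# (label 1 = hate speech, 0 = acceptable).
texts = ["primer sprejemljivega komentarja", "primer sovražnega komentarja"]
labels = [0, 1]

class CommentDataset(torch.utils.data.Dataset):
    """Wraps tokenized comments and labels for the Trainer."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# Fine-tune the whole model, including the classification head, on the labelled comments.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hate-speech-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=CommentDataset(texts, labels),
)
trainer.train()

Cross-lingual transfer can then be assessed by evaluating the fine-tuned model on test comments in a language that was left out of fine-tuning.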
