
Using explanations and large language models for sarcasm detection in Slovene
Jug, Gašper (Author), Robnik Šikonja, Marko (Mentor)


Abstract
Natural language processing has advanced considerably in recent years. An important part of it is sarcasm detection, which enables a better understanding of the sentiment of texts. Machine sarcasm detection in Slovene is little researched. In this thesis we study sarcasm detection in Slovene using a monolingual Slovene model (SloBERTa) and multilingual (Llama3 8B and GPT-4o) large language models. We used two English datasets, Reddit comments and news headlines, which we first machine-translated into Slovene. We applied fine-tuning and prompt engineering and evaluated how well the models detect sarcasm. We also tried to improve the models' results by adding explanations to the examples. The best results were achieved by fine-tuning Llama3 8B, followed by SloBERTa. The results of GPT-4o with prompt engineering were somewhat worse; adding explanations improved them noticeably, but they did not surpass the fine-tuned models. The results of Llama3 8B with prompt engineering were barely better than random guessing, and adding explanations brought only minimal improvements.
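
As a concrete illustration of the fine-tuning setup described in the abstract, the following is a minimal Python sketch of fine-tuning SloBERTa for binary sarcasm classification with the Hugging Face transformers library. The model id "EMBEDDIA/sloberta", the CSV file names, and the hyperparameters are assumptions for illustration, not details taken from the thesis.

# Minimal sketch: fine-tuning SloBERTa for binary sarcasm classification.
# Assumptions (not from the thesis): model id "EMBEDDIA/sloberta", CSV files
# with "text" and "label" columns, and the hyperparameters below.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_ID = "EMBEDDIA/sloberta"  # assumed Hugging Face id for SloBERTa

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

# Hypothetical file names; the thesis uses machine-translated Reddit comments
# and news headlines with binary sarcasm labels.
data = load_dataset("csv", data_files={"train": "train_sl.csv",
                                       "validation": "dev_sl.csv"})

def tokenize(batch):
    # Pad/truncate to a fixed length so the default collator can batch examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="sloberta-sarcasm",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"],
                  eval_dataset=data["validation"])
trainer.train()

The same recipe applies to Llama3 8B, except that a causal language model head and parameter-efficient methods (e.g. LoRA) would typically replace full fine-tuning of a classification head.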

Language: Slovenian
Keywords: natural language processing, language models, sarcasm detection
Work type: Bachelor thesis/paper
Organization: FRI - Faculty of Computer and Information Science
Year: 2024
PID: 20.500.12556/RUL-164985
Publication date in RUL: 19.11.2024
Views: 31
Downloads: 2

Secondary language

Language: English
Title: Using explanations and large language models for sarcasm detection in Slovene
Abstract:
Natural language processing (NLP) has advanced significantly in recent years. An important part of NLP is sarcasm detection, which enables a better understanding of sentiment in texts. There is little research on sarcasm detection in Slovene. In this thesis we explore sarcasm detection in Slovene using a monolingual Slovene model (SloBERTa) and multilingual (Llama3 8B and GPT-4o) large language models. We used two English datasets, the Reddit comments and news headlines datasets, which we first translated into Slovene using machine translation. We applied fine-tuning and prompt engineering and evaluated how well each model detects sarcasm. We also tried to improve the models' results by adding explanations to our examples. The best results were achieved by fine-tuning Llama3 8B, followed by SloBERTa. Results from prompt engineering with GPT-4o were slightly worse; while adding explanations led to a significant improvement, the results did not surpass those obtained by fine-tuning the significantly smaller SloBERTa model or the Llama3 8B model. The results of Llama3 8B with prompt engineering were only slightly better than random guessing, and adding explanations provided minimal improvements.
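
To illustrate the prompt-engineering-with-explanations approach mentioned above, here is a minimal Python sketch using the official openai client and GPT-4o. The prompt wording, the few-shot example, and its explanation are illustrative assumptions, not the prompts actually used in the thesis.

# Minimal sketch: prompting GPT-4o for Slovene sarcasm detection, with an
# explanation attached to a few-shot example. The prompt text and the example
# below are hypothetical, not taken from the thesis.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = ("You label Slovene texts as 'sarkastično' or 'ne-sarkastično'. "
          "Answer with a single label.")

# One labelled example plus a short explanation of *why* it is sarcastic
# ("Great, rain on vacation again."); the thesis adds such explanations to
# examples to improve detection.
FEW_SHOT = [
    {"role": "user", "content": "Besedilo: 'Super, spet dež na dopustu.'"},
    {"role": "assistant",
     "content": "sarkastično (razlaga: 'super' izraža nasprotje dejanskega nezadovoljstva)"},
]

def classify(text: str) -> str:
    # Build the full message list: system instruction, explained example, query.
    messages = [{"role": "system", "content": SYSTEM}, *FEW_SHOT,
                {"role": "user", "content": f"Besedilo: '{text}'"}]
    resp = client.chat.completions.create(model="gpt-4o",
                                          messages=messages,
                                          temperature=0)
    return resp.choices[0].message.content.strip()

print(classify("Komaj čakam, da v ponedeljek spet delam nadure."))

Setting temperature to 0 keeps the labels deterministic, which makes it easier to compare prompts with and without explanations on the same test set.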

Keywords: natural language processing, language models, sarcasm detection
