
Text simplification for Slovene using large language models
Bone, Blaž (Author), Robnik Šikonja, Marko (Mentor)


Abstract
In this thesis, we investigated text simplification in Slovene using large language models. The goal of the thesis was to develop models that can effectively simplify Slovene texts. We took existing English training datasets, machine-translated them into Slovene, and then trained models such as SloT5, mT5, and mBART on the translated data. We carried out a quantitative and qualitative analysis of the results, using metrics such as BLEU, SARI, BERTScore, and LaBSE similarity. The results showed that the models successfully simplified the texts, preserved key information, and sensibly simplified their structure and language. Despite these successful simplifications, the models often repeated the original sentences without major changes.
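
For illustration, the sketch below shows how a multilingual sequence-to-sequence model could be fine-tuned on complex-to-simple sentence pairs with the Hugging Face transformers and datasets libraries. It is a minimal sketch under stated assumptions, not the training setup used in the thesis: the toy sentence pair, the google/mt5-small checkpoint, and the hyperparameters are placeholders, and SloT5 or mBART checkpoints could be substituted.

# Minimal fine-tuning sketch (assumed setup, not the thesis's actual configuration).
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

# Hypothetical toy data; the thesis used machine-translated English simplification corpora.
pairs = Dataset.from_dict({
    "complex": ["Kljub slabemu vremenu se je prireditev odvijala po načrtu."],
    "simple": ["Prireditev je potekala po načrtu, čeprav je bilo vreme slabo."],
})

model_name = "google/mt5-small"  # a SloT5 or mBART checkpoint could be swapped in here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def preprocess(batch):
    # Tokenize the complex sentence as the input and the simple sentence as the target.
    features = tokenizer(batch["complex"], truncation=True, max_length=256)
    labels = tokenizer(text_target=batch["simple"], truncation=True, max_length=256)
    features["labels"] = labels["input_ids"]
    return features

tokenized = pairs.map(preprocess, batched=True, remove_columns=pairs.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="mt5-simplify-sl",
                                  num_train_epochs=1,
                                  per_device_train_batch_size=8,
                                  learning_rate=3e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()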

Language: Slovenian
Keywords: natural language processing, text simplification, machine learning, large language models
Work type: Bachelor thesis/paper
Typology: 2.11 - Undergraduate Thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2024
PID: 20.500.12556/RUL-160702
COBISS.SI-ID: 209156355
Publication date in RUL: 03.09.2024
Views: 200
Downloads: 35

Secondary language

Language: English
Title: Text simplification for Slovene using large language models
Abstract:
In this thesis, we explored text simplification in Slovene using large language models. The goal of the thesis was to develop models that can effectively simplify Slovene texts. We used existing English training datasets, which we machine-translated into Slovene, and then trained models such as SloT5, mT5, and mBART on the translated data. We conducted a quantitative and qualitative analysis of the results, using metrics such as BLEU, SARI, BERTScore, and LaBSE similarity. The results showed that the models can successfully simplify texts, retain key information, and meaningfully simplify structure and language. Despite these successful simplifications, the models often repeated the original sentences without significant changes.

Keywords: natural language processing, text simplification, machine learning, large language models
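
As a companion illustration, the snippet below computes the metrics named in the abstract (SARI, BLEU, BERTScore, and a LaBSE cosine similarity) with the Hugging Face evaluate library and sentence-transformers. It is a hedged sketch: the example sentences are invented, and the exact metric implementations and settings used in the thesis may differ.

# Assumed evaluation sketch; sentences and settings are illustrative only.
import evaluate
from sentence_transformers import SentenceTransformer, util

sources = ["Kljub slabemu vremenu se je prireditev odvijala po načrtu."]         # original sentences
predictions = ["Prireditev je potekala po načrtu, čeprav je bilo vreme slabo."]  # model output
references = [["Prireditev je bila po načrtu, čeprav je bilo slabo vreme."]]     # human simplifications

sari = evaluate.load("sari")
bleu = evaluate.load("sacrebleu")
bertscore = evaluate.load("bertscore")

# SARI compares the output against both the source and the reference simplifications.
print(sari.compute(sources=sources, predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=references))
print(bertscore.compute(predictions=predictions,
                        references=[r[0] for r in references],
                        lang="sl"))

# LaBSE similarity: cosine similarity between multilingual sentence embeddings.
labse = SentenceTransformer("sentence-transformers/LaBSE")
emb = labse.encode(predictions + [references[0][0]], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())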

