
Cross-lingual text summarization (Medjezikovno povzemanje besedil)
Pečovnik, Žan (Author), Robnik Šikonja, Marko (Mentor)

PDF - Presentation file (557.11 KB)
MD5: 5FA50AC6990083898213A97CA94F7EB0

Abstract
Cross-lingual text summarization is the process of generating a summary of a text in another language. It is one of the less-researched areas of natural language processing, since the majority of research focuses only on English. We developed three models capable of direct summarization from Slovene to English, based on the pre-trained models LongT5, PEGASUS-X, and BigBird. For training, we used the KAS 2.0 dataset, which contains 52,351 Slovene academic works and their corresponding English summaries. We conducted several experiments, fine-tuning the models on different portions of the training set. We evaluated the models quantitatively with the ROUGE-L and BLEURT metrics: LongT5 performed best, closely followed by PEGASUS-X, while BigBird scored approximately 8% lower on BLEURT but was comparable to the other two models on the remaining metrics. We also manually and qualitatively evaluated 30 generated summaries per model, classifying each as good or bad: LongT5 produced three good summaries, PEGASUS-X one, and BigBird none.
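As an illustration of the pipeline the abstract describes (not code from the thesis itself), here is a minimal Python sketch of Slovene-to-English summarization with a LongT5 checkpoint via the Hugging Face transformers library, followed by ROUGE-L scoring with the rouge-score package. The checkpoint is the public base model, not the fine-tuned model from the thesis, and the input and reference texts are placeholders:

# Minimal sketch (not the thesis code): Slovene-to-English summarization with
# a LongT5 checkpoint, then ROUGE-L scoring against a reference summary.
# Requires: pip install transformers torch rouge-score
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from rouge_score import rouge_scorer

# Public base checkpoint; the thesis fine-tunes such models on KAS 2.0.
model_name = "google/long-t5-tglobal-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

document = "Medjezikovno povzemanje besedil je proces ..."  # placeholder Slovene input
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
generated = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# ROUGE-L F1 between the generated summary and the reference English summary.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
score = scorer.score(target="placeholder reference summary", prediction=generated)
print(generated)
print(score["rougeL"].fmeasure)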

Language: Slovenian
Keywords: cross-lingual text summarization, natural language processing, transformer architecture, pre-trained language models, LongT5 model, PEGASUS-X model, BigBird model, BLEURT metric, ROUGE-L metric
Work type: Master's thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2024
PID: 20.500.12556/RUL-164918
Publication date in RUL: 15.11.2024