
Natural language inference for Slovene using large language models
KMECL, TIM (Author), Robnik Šikonja, Marko (Mentor)

PDF - Presentation file (916.14 KB)

Abstract
In recent years, large language models have been the most successful approach to machine natural language understanding. An important problem in this field is natural language inference, which also requires models to have knowledge of the real world; machine-generated explanations of inferences give additional insight into how the models work. In this thesis we tested several approaches to natural language inference for Slovene. We used two Slovene large language models, SloBERTa and SloT5, and the much larger English language model GPT-3.5-turbo. For training we used the Slovene dataset SI-NLI and machine-translated an additional 50,000 examples from the English dataset ESNLI. The SloBERTa model was fine-tuned on both datasets. Fine-tuned on SI-NLI, SloBERTa achieves a classification accuracy of 74.4% on the SI-NLI test set; pretraining on the ESNLI translations improved the accuracy to 75.3%. We found that the models make different kinds of errors than humans and that they generalize poorly across example domains. SloT5 was fine-tuned on ESNLI to generate explanations for natural language inference. Fewer than a third of the explanations are appropriate: the model learns the common sentence patterns of explanations well, but most are semantically meaningless. We conclude that Slovene large language models with a few hundred million parameters are capable of finding and using language patterns, but that their knowledge of the language is not tied to knowledge of reality. We also used the larger GPT-3.5-turbo model for classification and explanation generation. With zero-shot learning it achieves an accuracy of 56.5% on the SI-NLI test set, and 81% of its explanations for correctly classified examples are appropriate. Compared with the smaller Slovene models, it shows a reasonably good understanding of reality, though it is limited by its weaker command of Slovene.
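
The thesis fine-tunes SloBERTa for three-way natural language inference classification (entailment / neutral / contradiction) on SI-NLI. Below is a minimal sketch of such a setup using Hugging Face Transformers; the hub identifiers (EMBEDDIA/sloberta, cjvt/si_nli), the column names, and the hyperparameters are illustrative assumptions, not details taken from the thesis.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed Hugging Face hub IDs; not confirmed by the thesis.
MODEL_ID = "EMBEDDIA/sloberta"
DATASET_ID = "cjvt/si_nli"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=3)  # entailment / neutral / contradiction

# Assumes the dataset provides "premise", "hypothesis" and integer "label" columns.
dataset = load_dataset(DATASET_ID)

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="sloberta-si-nli",
    learning_rate=2e-5,              # illustrative hyperparameters
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  tokenizer=tokenizer)
trainer.train()
print(trainer.evaluate())  # reports validation loss and any configured metrics

In the same spirit, pretraining on the machine-translated ESNLI examples before fine-tuning on SI-NLI amounts to running two such fine-tuning passes in sequence, reusing the checkpoint from the first pass as MODEL_ID for the second.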

Language:Slovenian
Keywords:natural language inference, large language models, Transformer architecture, SloBERTa, SloT5, GPT-3.5-turbo, ChatGPT, Slovene, fine-tuning
Work type:Bachelor thesis/paper
Typology:2.11 - Undergraduate Thesis
Organization:FRI - Faculty of Computer and Information Science
FMF - Faculty of Mathematics and Physics
Year:2023
PID:20.500.12556/RUL-149573
COBISS.SI-ID:165503747
Publication date in RUL:07.09.2023

Secondary language

Language:English
Title:Natural language inference for Slovene using large language models
Abstract:
In recent years, large language models have been the most successful approach to natural language processing. An important problem in this field is natural language inference, which requires models to understand the real world to some degree. Requiring models to explain their reasoning offers additional insight into their functioning. We tested several approaches to natural language inference in Slovene. We used two Slovene large language models, SloBERTa and SloT5, as well as the much larger English model GPT-3.5-turbo. The training data consisted of the Slovene dataset SI-NLI and an additional 50,000 machine-translated samples from the English dataset ESNLI. The SloBERTa model was fine-tuned on both datasets. Fine-tuned on SI-NLI, it achieves a classification accuracy of 74.4% on the SI-NLI test set; pretraining it on the ESNLI translations improves the accuracy to 75.3%. We observe that the models make different types of errors than humans and that they generalize poorly across different datasets. SloT5 was fine-tuned on ESNLI to generate explanations for natural language inference samples. Fewer than a third of its explanations are appropriate: the model learns common sentence patterns from the domain but mostly produces semantically meaningless explanations. We conclude that Slovene large language models with several hundred million parameters are capable of identifying and using language patterns, but that language proficiency is not inherently tied to an understanding of reality. The even larger GPT-3.5-turbo was used for both classification and explanation generation. With zero-shot learning it achieves an accuracy of 56.5% on the SI-NLI test set, and 81% of its explanations for correctly classified samples are appropriate. Compared with the smaller Slovene models, it shows a reasonably good understanding of reality but is limited by its weaker command of Slovene.
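
For GPT-3.5-turbo, the thesis uses zero-shot prompting for both classification and explanation generation. The sketch below shows what such a zero-shot query could look like with the OpenAI Python client; the prompt wording and the example pair are illustrative and are not the prompts used in the thesis.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def classify_with_explanation(premise: str, hypothesis: str) -> str:
    # Illustrative prompt; not the wording used in the thesis.
    prompt = (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Does the hypothesis follow from the premise (entailment), "
        "contradict it (contradiction), or neither (neutral)? "
        "Answer with one label and a short explanation in Slovene."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for easier evaluation
    )
    return response.choices[0].message.content

# Hypothetical example pair: "A dog is running across a meadow." /
# "An animal is moving." -> expected label: entailment
print(classify_with_explanation("Pes teče po travniku.", "Žival se giblje."))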

Keywords:natural language inference, large language models, Transformer architecture, SloBERTa, SloT5, GPT-3.5-turbo, ChatGPT, Slovene, fine-tuning
