
Fine-tuning pretrained BERT model for Slovene classification tasks
BOMBEK, MIHA (Author), Robnik Šikonja, Marko (Mentor)

PDF - Presentation file (372.81 KB)

Abstract
Transformer-based models, such as the pretrained BERT model, are currently the most successful approach to text-processing tasks. When adapting a pretrained model to a specific task, all of the model's parameters are usually fine-tuned. We investigate methods for fine-tuning BERT that adjust only a small fraction of the parameters, and analyze the results on Slovene classification tasks. We fine-tune the multilingual models CroSloEngual BERT and mBERT on named entity recognition and universal part-of-speech (UPOS) tagging. We compare four fine-tuning methods: full-model fine-tuning, tuning only the classification head, adapter tuning, and AdapterFusion tuning. We show that adapter tuning achieves good results despite the small number of tuned parameters, and that AdapterFusion can even outperform full-model fine-tuning. We find that AdapterFusion is more beneficial on higher-level classification tasks. Its downside is training time, since the full AdapterFusion procedure can be lengthy.
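
For illustration, here is a minimal PyTorch sketch (not the thesis code) of the bottleneck-adapter idea the abstract refers to: a small down-projection/up-projection module with a residual connection is inserted into each transformer layer and trained, while the pretrained BERT weights stay frozen. The hidden and bottleneck sizes below are illustrative, not taken from the thesis.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: project the hidden state down, apply a
    nonlinearity, project back up, and add a residual connection.
    During adapter tuning, only these parameters are updated while
    the pretrained encoder stays frozen."""

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# One adapter per transformer layer adds roughly 99k trainable parameters
# (2 * 768 * 64 weights plus biases), versus roughly 7M parameters in a
# full BERT layer, which is why adapter tuning touches only a small
# fraction of the model.
adapter = BottleneckAdapter()
print(sum(p.numel() for p in adapter.parameters()))  # 99136
```

AdapterFusion, the fourth method compared in the thesis, additionally learns an attention-like weighting that combines several such task-specific adapters, which is why its overall training procedure takes longer.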

Language: Slovenian
Keywords: machine learning, natural language processing, BERT model, classification task, AdapterFusion fine-tuning
Work type: Bachelor thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2021
COBISS.SI-ID: 51809539
Views: 121
Downloads: 56

