
Deep neural networks for comma placement in Slovene
Božič, Martin (Author), Robnik Šikonja, Marko (Mentor)

PDF - Presentation file (3.14 MB)

Abstract
Comma placement is among the most common errors made when writing Slovene texts. This thesis focuses on comma placement using deep neural networks. We present two architectures: one based on recurrent neural networks with GRU cells, and another using a pretrained language model of the BERT type. With the BERT language model we observe better classification accuracy. The reasons are the model's better and more complex architecture, and a training process that equips the model with extensive linguistic knowledge. Using the multilingual BERT model, trained on 104 languages with only a small set of Slovene texts, we obtain a solution comparable to the one obtained with a trilingual Slovene-Croatian-English BERT model.
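The comma-placement task described above can be framed as binary token classification: for each token in a sentence, predict whether a comma should directly follow it. A minimal sketch of this data preparation (the function and label convention are illustrative assumptions, not taken from the thesis):

```python
def make_examples(sentence: str):
    """Turn a comma-annotated sentence into (token, label) pairs.

    Label 1 means a comma should directly follow the token,
    label 0 means no comma. Commas are stripped from the input
    tokens, since the model has to predict them.
    """
    tokens, labels = [], []
    for raw in sentence.split():
        if raw.endswith(","):
            tokens.append(raw.rstrip(","))
            labels.append(1)
        else:
            tokens.append(raw)
            labels.append(0)
    return tokens, labels


tokens, labels = make_examples("Mislim, da pridem jutri.")
print(tokens)  # ['Mislim', 'da', 'pridem', 'jutri.']
print(labels)  # [1, 0, 0, 0]
```

A classifier trained on such pairs can then reinsert commas into unpunctuated text by emitting one after every token labelled 1.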

Language:Slovenian
Keywords:deep neural networks, GRU networks, BERT model, comma placement
Work type:Bachelor thesis/paper (mb11)
Organization:FRI - Faculty of Computer and Information Science
Year:2020
COBISS.SI-ID:27670787

Secondary language

Language:English
Title:Deep neural networks for comma placement in Slovene
Abstract:
Comma placement is among the most frequent orthographic mistakes in Slovene. The thesis focuses on comma placement using deep neural networks. We present two architectures: one based on recurrent neural networks with GRU cells, and another using a pretrained BERT language model. With the pretrained BERT model we obtain better classification accuracy. The reasons are the model's better and more complex architecture, and its training process, which fine-tunes a pretrained model carrying substantial language knowledge. With multilingual BERT, trained on 104 languages and only a small amount of Slovene text, we achieve results comparable to a Slovene-Croatian-English BERT model trained on much more Slovene text.
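The GRU-based architecture mentioned in the abstract can be sketched as a simple sequence tagger: an embedding layer feeds a bidirectional GRU, and a linear layer scores each token as comma/no-comma. A minimal illustrative PyTorch sketch, where all dimensions and names are assumptions rather than the thesis configuration:

```python
import torch
import torch.nn as nn


class CommaTagger(nn.Module):
    """Bidirectional GRU tagger: for each input token, predict
    whether a comma should follow it (2-class output)."""

    def __init__(self, vocab_size=10_000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.classify = nn.Linear(2 * hidden, 2)  # comma / no comma

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        states, _ = self.gru(self.embed(token_ids))
        return self.classify(states)  # (batch, seq_len, 2) logits


model = CommaTagger()
batch = torch.randint(0, 10_000, (4, 12))  # 4 sentences, 12 tokens each
logits = model(batch)
print(logits.shape)  # torch.Size([4, 12, 2])
```

The BERT variant replaces the embedding and GRU layers with a pretrained transformer encoder whose contextual states feed the same per-token classification head.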

Keywords:deep neural networks, GRU networks, BERT model, comma placement
