
Generative model for less-resourced language with 1 billion parameters
Vreš, Domen (Author), Božič, Martin (Author), Potočnik, Aljaž (Author), Martinčič, Tomaž (Author), Robnik Šikonja, Marko (Author)

PDF - Presentation file (626.28 KB)
MD5: A0A044C23F57A6991DF3139236A00FE9
Source URL: https://zenodo.org/records/13912515

Abstract
Large language models (LLMs) are a basic infrastructure for modern natural language processing. Many commercial and open-source LLMs exist for English, e.g., ChatGPT, Llama, Falcon, and Mistral. As these models are trained mostly on English texts, their fluency and knowledge of low-resource languages and societies are superficial. We present the development of large generative language models for a less-resourced language. GaMS 1B - Generative Model for Slovene with 1 billion parameters - was created by continuing the pretraining of the existing English OPT model. We developed a new tokenizer adapted to the Slovene, Croatian, and English languages and used the embedding initialization methods FOCUS and WECHSEL to transfer the embeddings from the English OPT model. We evaluate our models on several classification datasets from the Slovene suite of benchmarks and on the generative sentence simplification task SENTA. We used only few-shot in-context learning of our models, which are not yet instruction-tuned. For classification tasks, in this mode, the generative models lag behind the existing Slovene BERT-type models fine-tuned for specific tasks. On the sentence simplification task, the GaMS models achieve comparable or better performance than the GPT-3.5-Turbo model.
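
The abstract names the embedding-transfer step only briefly, so below is a minimal sketch of the core idea behind a WECHSEL-style initialization: each token of the new Slovene/Croatian-adapted vocabulary receives an embedding built as a similarity-weighted average of source-model (English OPT) embeddings. The function name, the `sim` matrix, and the parameters `k` and `temperature` are illustrative assumptions, not the authors' code; WECHSEL and FOCUS differ mainly in how the token similarities are obtained (e.g., WECHSEL derives them from aligned static word vectors).

```python
import numpy as np

def wechsel_style_init(src_emb, sim, k=10, temperature=0.1):
    """Sketch of WECHSEL-style embedding transfer: initialize each
    target-token embedding as a similarity-weighted average of
    source-model embeddings.

    src_emb: (V_src, d) input embeddings of the source (English) model
    sim:     (V_tgt, V_src) similarity of each new token to each source
             token, e.g. cosine similarity of aligned word vectors
    """
    tgt_emb = np.empty((sim.shape[0], src_emb.shape[1]))
    for i in range(sim.shape[0]):
        nn = np.argpartition(sim[i], -k)[-k:]  # k most similar source tokens
        w = np.exp(sim[i, nn] / temperature)   # softmax weights over neighbors
        w /= w.sum()
        tgt_emb[i] = w @ src_emb[nn]           # weighted average of embeddings
    return tgt_emb

# Toy usage: 5 source tokens, 3 new tokens, 4-dimensional embeddings.
rng = np.random.default_rng(0)
new_emb = wechsel_style_init(rng.normal(size=(5, 4)),
                             rng.random(size=(3, 5)), k=2)
print(new_emb.shape)  # (3, 4)
```

Continued pretraining then starts from the source-model weights with the embedding matrix replaced in this way, which is the transfer setup the abstract describes.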

Language:English
Keywords:large language models, generative models, knowledge transfer, OPT model, language adaptation
Work type:Other
Typology:1.08 - Published Scientific Conference Contribution
Organization:FRI - Faculty of Computer and Information Science
Publication status:Published
Publication version:Version of Record
Year:2024
Pages:485-511
PID:20.500.12556/RUL-164282
UDC:004.8:81'322
COBISS.SI-ID:212016131
Publication date in RUL:18.10.2024

Record is a part of a monograph

Title:Jezikovne tehnologije in digitalna humanistika : zbornik konference (Language Technologies and Digital Humanities: Conference Proceedings)
Editors:Špela Arhar Holdt, Tomaž Erjavec
Place of publishing:Ljubljana
Publisher:Inštitut za novejšo zgodovino (Institute of Contemporary History)
Year:2024
ISBN:978-961-7104-40-0
COBISS.SI-ID:211315971

Licences

License:CC BY-SA 4.0, Creative Commons Attribution-ShareAlike 4.0 International
Link:http://creativecommons.org/licenses/by-sa/4.0/
Description:This Creative Commons license is very similar to the regular Attribution license, but requires that all derivative works be released under the same license.

Secondary language

Language:Slovenian
Title:Generativni model z milijardo parametrov za jezik z manj viri
Abstract:
Veliki jezikovni modeli so osnovna infrastruktura za sodobno obdelavo naravnega jezika. Za angleščino obstajajo številni komercialni in odprtokodni modeli, na primer ChatGPT, Llama, Falcon in Mistral. Ker so ti modeli učeni večinoma na angleških besedilih, sta njihovo znanje in poznavanje jezikov ter družb z manj viri površna. Predstavljamo razvoj novega generativnega velikega jezikovnega modela za jezik z malo viri. Za slovenski model, imenovan GaMS 1B (Generativni Model za Slovenščino), z 1 milijardo parametrov smo razvili nov tokenizator, prilagojen slovenščini, hrvaščini in angleščini, ter uporabili metodi inicializacije vektorskih vložitev FOCUS in WECHSEL za prenos vložitev iz obstoječega angleškega modela OPT. Zgrajene modele smo ovrednotili na slovenski zbirki klasifikacijskih učnih množic in na generativni nalogi poenostavljanja stavkov SENTA. Pri evalvaciji smo uporabili le učenje v kontekstu z nekaj učnimi primeri ter modele, ki še niso prilagojeni za sledenje navodilom. Pri takih nastavitvah so na klasifikacijskih nalogah zgrajeni generativni modeli zaostali za obstoječimi slovenskimi modeli tipa BERT, ki so bili prilagojeni za dane naloge. Pri nalogi poenostavljanja stavkov modeli GaMS dosegajo primerljive ali boljše rezultate kot model GPT-3.5-Turbo.

Keywords:veliki jezikovni modeli, generativni modeli, prenos znanja, OPT model, GaMS model, jezikovno prilagajanje

Projects

Funder:ARIS - Slovenian Research and Innovation Agency
Name:Adaptive Natural Language Processing with the Help of Large Language Models

Funder:ARIS - Slovenian Research and Innovation Agency
Project number:P6-0411
Name:Jezikovni viri in tehnologije za slovenski jezik (Language Resources and Technologies for the Slovene Language)

Funder:ARIS - Slovenian Research and Innovation Agency
Project number:J7-3159
Name:Empirična podlaga za digitalno podprt razvoj pisne jezikovne zmožnosti (Empirical Foundations for Digitally Supported Development of Writing Skills)

Funder:ARIS - Slovenian Research and Innovation Agency
Project number:L2-50070
Name:Tehnike vektorskih vložitev za medijske aplikacije (Embedding Techniques for Media Applications)
