Details

Reducing the size of language models through quantization
Premuš, Luka (Author), Hočevar, Tomaž (Mentor)

PDF - Presentation file (958.35 KB)
MD5: 38BDDE13BC6DCE81EDECB6245A7984AC

Abstract
Large language models such as BERT have transformed natural language processing, but their size and computational demands hinder wider adoption, especially on resource-constrained devices. This thesis addresses the problem of reducing the size of language models with quantization, a technique that reduces the numerical precision of a model's weights and activations. It focuses on post-training quantization (PTQ) methods, specifically dynamic and static quantization, which are implemented and evaluated on a BERT classification model using the ONNX Runtime library. Quantization-aware training (QAT) and other commonly used methods for reducing the size of language models are also presented theoretically. The impact of the PTQ techniques on model size, inference speed, and predictive accuracy is analyzed. The results show that quantization significantly reduces model size and speeds up inference. Dynamic quantization of the BERT model achieves a good balance between compression and accuracy preservation, while basic static quantization causes a noticeable degradation in performance. The thesis thus provides an overview of quantization techniques and a practical assessment of the trade-offs of applying post-training quantization to the BERT model.
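As a minimal illustration (not taken from the thesis), the sketch below shows how dynamic and static post-training quantization of an exported BERT classifier might be applied with ONNX Runtime's quantization tools. The file names, model input names, vocabulary size and the random calibration batches are placeholder assumptions, not details from the work itself.

# Hypothetical sketch: dynamic and static PTQ of an exported BERT classifier
# with ONNX Runtime. Paths, input names and calibration data are placeholders.
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantType,
    quantize_dynamic,
    quantize_static,
)

# Dynamic PTQ: weights are quantized offline, activations at inference time.
quantize_dynamic(
    model_input="bert_classifier.onnx",       # assumed path to the FP32 model
    model_output="bert_classifier_dyn.onnx",
    weight_type=QuantType.QInt8,              # 8-bit integer weights
)

# Static PTQ: activation ranges are calibrated on a small representative set.
class DummyCalibrationReader(CalibrationDataReader):
    """Feeds a few random batches; a real setup would use tokenized samples."""

    def __init__(self, n_batches=8, seq_len=128):
        rng = np.random.default_rng(0)
        self._batches = iter([
            {
                "input_ids": rng.integers(0, 30522, (1, seq_len), dtype=np.int64),
                "attention_mask": np.ones((1, seq_len), dtype=np.int64),
                "token_type_ids": np.zeros((1, seq_len), dtype=np.int64),
            }
            for _ in range(n_batches)
        ])

    def get_next(self):
        return next(self._batches, None)

quantize_static(
    model_input="bert_classifier.onnx",
    model_output="bert_classifier_static.onnx",
    calibration_data_reader=DummyCalibrationReader(),
    weight_type=QuantType.QInt8,
    activation_type=QuantType.QInt8,
)

In a real evaluation the calibration reader would feed tokenized samples from the target dataset, since the accuracy of static quantization depends on representative activation statistics.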

Language:Slovenian
Keywords:language model, BERT, post-training quantization, quantization-aware training, reduced memory usage, faster inference
Work type:Bachelor thesis/paper
Typology:2.11 - Undergraduate Thesis
Organization:FRI - Faculty of Computer and Information Science
Year:2025
PID:20.500.12556/RUL-170762
COBISS.SI-ID:243967747
Publication date in RUL:15.07.2025
Views:250
Downloads:54

