Details

Analysis and Comparison of the Performance of Large Language Models in Generating Web Applications Based on Incomplete Prompts
Slatnar, Klemen (Author), Smrdel, Aleš (Mentor)

PDF - Presentation file (873.83 KB)
MD5: A23BF171C3CEE2831C9317F31ADFCC9C

Abstract
The fields of large language models and application development are intertwined, as web application developers increasingly use large language models as coding partners in order to improve their efficiency and productivity. Nevertheless, it remains unexplored which large language models truly provide the highest productivity alongside a satisfactory user experience in web application development. We chose to investigate this problem by measuring the interactions of selected models during the automatic generation of code (applications) for chosen tasks, and with a UMUX questionnaire completed by users who also tested the automatically generated applications. The models used in the experiment were Claude 3.7 Sonnet, Gemini 2.5 Pro Preview, and GPT-4o. Each of the selected large language models was given three tasks with incomplete and ambiguous instructions for generating web applications, which allowed us to assess their ability to generate applications based on instructions in the form of prompts (the kind of instructions developers often receive from their clients). We observed how successful these models were at generating applications and how many additional prompts were needed to obtain a working application. We found that Claude 3.7 Sonnet provides developers with the highest productivity, as it required the least developer intervention during implementation and operated the most autonomously. The generated applications were then evaluated through user testing, after which the users completed the UMUX questionnaire. Gemini 2.5 Pro Preview achieved the highest average UMUX score, but required more iterations than Claude 3.7 Sonnet.

Language: Slovenian
Keywords: artificial intelligence, large language models, internet, user interfaces, web technologies, UMUX
Work type: Bachelor thesis/paper
Typology: 2.11 - Undergraduate Thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2025
PID: 20.500.12556/RUL-170526
COBISS.SI-ID: 243133443
Publication date in RUL: 08.07.2025
Views: 277
Downloads: 58

Secondary language

Language: English
Title: Analysis and Comparison of the Performance of Large Language Models in Generating Web Applications Based on Incomplete Prompts
Abstract:
The fields of large language models (LLMs) and application development are increasingly intertwined, as web application developers are increasingly using LLMs as coding partners to improve their efficiency and productivity. However, it remains unclear which LLMs truly offer the highest productivity alongside a satisfactory user experience in the context of web application development. To explore this issue, we evaluated selected LLMs by measuring their interactions during the automatic generation of code (applications) for predefined tasks, and by using the UMUX questionnaire with users who tested the generated applications. The models used in the experiment were Claude 3.7 Sonnet, Gemini 2.5 Pro Preview, and GPT-4o. Each of these LLMs was assigned three tasks that included incomplete and ambiguous instructions, allowing us to assess their ability to generate applications based on instructions in the form of prompts (the kind of instructions developers often receive from clients). We observed how successful the models were in generating functional applications and how many additional prompts were needed to produce a working solution. Our results show that Claude 3.7 Sonnet delivers the highest developer productivity, requiring the least developer intervention and demonstrating the greatest degree of autonomy. The generated applications were then evaluated through user testing, followed by the UMUX questionnaire. While Gemini 2.5 Pro Preview achieved the highest average UMUX score, it required more iterations than Claude 3.7 Sonnet.
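The average UMUX scores mentioned above come from the standard four-item UMUX questionnaire. As a minimal sketch, assuming the conventional UMUX scoring (four 7-point items, with the even-numbered, negatively worded items reverse-scored and the raw 0-24 total rescaled to 0-100), per-user and average scores could be computed as shown below; the responses in the example are hypothetical and are not data from the thesis.

```python
# Minimal sketch of standard UMUX scoring: four items on a 7-point scale,
# odd items worded positively, even items negatively (reverse-scored).
# The response values below are hypothetical and only illustrate how an
# average score per application/model is obtained.

def umux_score(responses: list[int]) -> float:
    """Return a 0-100 UMUX score for one respondent's four answers (1-7)."""
    if len(responses) != 4 or any(not 1 <= r <= 7 for r in responses):
        raise ValueError("UMUX expects four answers on a 1-7 scale")
    # Odd items (1st, 3rd) contribute (answer - 1); even items (2nd, 4th)
    # are reverse-scored and contribute (7 - answer).
    total = sum(r - 1 if i % 2 == 0 else 7 - r for i, r in enumerate(responses))
    return total * 100 / 24  # rescale the 0-24 raw total to 0-100


# Hypothetical responses from three test users for one generated application.
users = [[6, 2, 7, 1], [5, 3, 6, 2], [7, 1, 6, 2]]
scores = [umux_score(u) for u in users]
print(f"average UMUX: {sum(scores) / len(scores):.1f}")
```

With this scoring, 100 corresponds to the best possible answer on all four items, which makes the averages directly comparable across the three evaluated models.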

Keywords: artificial intelligence, large language models, internet, user interfaces, web technologies, UMUX
