Details

Multi-agent deep reinforcement learning for auto-scaling in cloud-native architectures
Prodanov, Jovan (Author), Jurič, Branko Matjaž (Mentor), Hribar, Jernej (Co-mentor)

PDF - Presentation file, Download (2.55 MB)
MD5: 73FFA2BBCAAF68E6E2A63416F51FC0B0

Abstract
Cloud-native systems rely on elasticity to dynamically adjust resources to unpredictable workloads. Traditional heuristic approaches are often insufficient because they cannot model complex workload dynamics or balance the trade-off between performance and resource utilization. Instead, they react only by replicating or restarting containers, which can interrupt the continuous operation of microservices in distributed environments. This limitation is even more pronounced in edge-cloud environments, where heterogeneity, network variability, and limited resources further complicate seamless scaling. To overcome these challenges, we present MARLISE, a framework for precise in-place scaling of microservices without disruption. It was developed with three variants of deep reinforcement learning algorithms: Deep Q-Networks (DQN), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). We evaluated each MARLISE variant and found that it reduces response times, improves resource efficiency, and enhances scalability, making it a promising solution for providing resource elasticity for stateful microservices in edge-cloud environments.

Language:Slovenian
Keywords:Kubernetes, Edge-cloud, Auto-scaling, Multi-Agent Deep Reinforcement Learning, In-place scaling, Microservices
Work type:Master's thesis/paper
Typology:2.09 - Master's Thesis
Organization:FRI - Faculty of Computer and Information Science
Year:2025
PID:20.500.12556/RUL-175168
COBISS.SI-ID:255582211
Publication date in RUL:20.10.2025
Views:187
Downloads:43

Secondary language

Language:English
Title:Multi-agent deep reinforcement learning for auto-scaling in cloud-native architectures
Abstract:
Cloud-native systems depend on elasticity to dynamically adjust resources in response to unpredictable workloads. Traditional threshold and heuristic approaches are often insufficient because they cannot model complex workload dynamics or balance trade-offs between performance and resource usage. Instead, they rely on reactive container replication or restarts, which can disrupt the continuity of microservices in distributed environments. This limitation is even more pronounced in edge-cloud environments, where heterogeneity, network variability, and limited resources make seamless scaling more difficult. To overcome these challenges, we introduce the Multi-Agent Reinforcement Learning-based In-place Scaling Engine (MARLISE), a framework for precise in-place scaling of microservices. It is developed using three versions of Deep Reinforcement Learning algorithms: Deep Q-Networks (DQN), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). We evaluated each version of MARLISE and found that it reduces response times, improves resource efficiency, and enhances scalability compared to heuristic methods. Our results show MARLISE as a promising solution for resource elasticity of stateful microservices in edge-cloud environments.

Keywords:Kubernetes, Edge-cloud, Auto-scaling, Multi-Agent Deep Reinforcement Learning, In-place scaling, Microservices
