
Multi-level monitoring and rule based reasoning in the adaptation of time-critical cloud applications
Taherizadeh, Salman (Author), Stankovski, Vlado (Mentor)

PDF - Presentation file (6.10 MB)
MD5: B49063F4C7DD0211354DE54ABED7B873

Abstract
Cloud computing is nowadays used to deploy a variety of software services, since it allows computational resources to be rented on demand and applications to be scaled easily. Modern software engineering approaches enable the development of time-critical cloud applications from components deployed in containers. Container technologies, such as Docker, Kubernetes, CoreOS, Swarm and OpenShift Origin, enable the development of highly dynamic cloud applications under constantly changing workloads. Cloud applications built on container technologies require sophisticated auto-scaling methods in order to operate under varied workload conditions, for example under drastic workload changes. Imagine a cloud-based social network in which a piece of news suddenly goes viral. On the one hand, the cloud application needs sufficient computational resources before the workload spike occurs. On the other hand, renting expensive cloud infrastructure over a prolonged period is unnecessary and therefore undesirable. The choice of an auto-scaling method for a cloud application thus significantly affects service quality parameters such as response time and resource utilisation. Existing container orchestration systems, such as Kubernetes and Amazon EC2, employ automatic rules with statically defined thresholds that rely mainly on infrastructure metrics, such as CPU and memory utilisation. In this doctoral dissertation we present a new Dynamic Multi-Level (DM) auto-scaling method for cloud applications, which uses application-level metrics with dynamically changing thresholds in addition to infrastructure metrics. We have embedded the new DM method into a working architecture of a system for application self-adaptation.
We compare the new DM method with seven existing auto-scaling methods under various synthetic and real-world workload scenarios. The comparable auto-scaling approaches include Kubernetes Horizontal Pod Auto-scaling (HPA), Step Scaling 1 and 2 (SS1, SS2), Target Tracking Scaling 1 and 2 (TTS1, TTS2), and Static Threshold Based Scaling 1 and 2 (THRES1, THRES2). All of the examined auto-scaling methods currently count as highly advanced approaches used in production systems, such as those built on Kubernetes and Amazon EC2. The workload scenarios used in this work represent steadily rising/falling, drastically changing, gently varying and real-world workload patterns. Based on the results of experiments carried out for each workload pattern separately, we compared all eight selected auto-scaling methods with respect to response time and the number of instantiated containers. Taken as a whole, the results show that the proposed new DM method exhibits better overall self-adaptability than the remaining methods. Owing to these satisfactory results, we integrated the proposed DM method into the SWITCH system for software engineering of time-critical cloud applications. Application self-adaptation rules and other information, such as properties of virtualisation platforms, the current application workload, recurring demands for higher quality of service and the like, are continuously stored as Resource Description Framework (RDF) triples in a knowledge base, which is also part of the proposed architecture. A key requirement for the knowledge base is to give all stakeholders of the SWITCH software platform, such as cloud service providers, the ability to integrate information, analyse long-term trends and support strategic planning.
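The contrast between static and dynamically changing thresholds described above can be illustrated with a minimal sketch. All names, formulas and constants here are hypothetical stand-ins, not the thesis's actual DM rules; they only show the idea of adapting a scaling threshold to the workload trend while combining infrastructure-level and application-level metrics:

```python
# Hypothetical sketch of a dynamic multi-level auto-scaling decision.
# The functions, parameters and constants are illustrative assumptions.

def dynamic_threshold(history, base=70.0, sensitivity=0.5):
    """Adapt the CPU scaling threshold to the recent workload trend.

    A rising trend lowers the threshold, so new containers are
    provisioned before the surge peaks; a falling trend raises it.
    """
    if len(history) < 2:
        return base
    trend = history[-1] - history[0]  # crude slope over the window
    return base - sensitivity * trend

def scale_decision(cpu_util, response_time_ms, history, rt_limit_ms=200.0):
    """Combine an infrastructure metric (CPU utilisation, %) with an
    application metric (response time, ms), in the spirit of
    multi-level monitoring."""
    threshold = dynamic_threshold(history)
    if cpu_util > threshold or response_time_ms > rt_limit_ms:
        return "scale-out"
    if cpu_util < threshold / 2 and response_time_ms < rt_limit_ms / 2:
        return "scale-in"
    return "hold"

# Rising workload: the adapted threshold drops below the static 70%
# default (here to 55%), so scale-out triggers earlier.
print(scale_decision(65.0, 120.0, [30.0, 45.0, 60.0]))  # scale-out
```

A static-threshold rule with the default 70% would have held at 65% CPU; the trend-adjusted threshold reacts one step earlier, which is the behaviour the DM method aims for under drastically changing workloads.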

Language: Slovenian
Keywords: time-critical applications, cloud computing, self-adaptation, dynamic thresholds, multi-level monitoring, container-based virtualization
Work type: Doctoral dissertation
Organization: FRI - Faculty of Computer and Information Science
Year: 2018
PID: 20.500.12556/RUL-104058
Publication date in RUL: 03.10.2018
Views: 1625
Downloads: 455

Secondary language

Language: English
Title: Multi-level monitoring and rule based reasoning in the adaptation of time-critical cloud applications
Abstract:
Nowadays, different types of online services are often deployed and operated on the cloud, since it offers a convenient on-demand model for renting resources and easy-to-use elastic infrastructures. Moreover, the modern software engineering discipline provides means to design time-critical services based on a set of components running in containers. Container technologies, such as Docker, Kubernetes, CoreOS, Swarm and OpenShift Origin, are enablers of highly dynamic cloud-based services capable of addressing continuously varying workloads. Due to their lightweight nature, containers can be instantiated, terminated and managed very dynamically. Container-based cloud applications require sophisticated auto-scaling methods in order to operate under different workload conditions, such as drastically changing workload scenarios. Imagine a cloud-based social media website on which a piece of news suddenly becomes viral. On the one hand, in order to ensure the users' experience, it is necessary to allocate enough computational resources before the workload intensity surges at runtime. On the other hand, renting expensive cloud-based resources can be unaffordable over a prolonged period of time. Therefore, the choice of an auto-scaling method may significantly affect important service quality parameters, such as response time and resource utilisation. Current cloud providers, such as Amazon EC2, and container orchestration systems, such as Kubernetes, employ auto-scaling rules with static thresholds and rely mainly on infrastructure-related monitoring data, such as CPU and memory utilisation. This thesis presents a new Dynamic Multi-Level (DM) auto-scaling method whose rules use dynamically changing thresholds and exploit not only infrastructure-level but also application-level monitoring data. The new DM method is implemented within our proposed viable architecture for auto-scaling containerised applications.
The new DM method is compared with seven existing auto-scaling methods in different synthetic and real-world workload scenarios. These auto-scaling approaches include Kubernetes Horizontal Pod Auto-scaling (HPA), the first and second methods of Step Scaling (SS1, SS2), the first and second methods of Target Tracking Scaling (TTS1, TTS2), and the first and second methods of static threshold-based scaling (THRES1, THRES2). All investigated auto-scaling methods are currently considered advanced approaches used in production systems such as Kubernetes and Amazon EC2. The workload scenarios examined in this work comprise a slowly rising/falling pattern, a drastically changing pattern, an on-off pattern, a gently shaking pattern, and a real-world pattern. Based on the experimental results obtained for each workload pattern, all eight auto-scaling methods are compared according to response time and the number of instantiated containers. The results as a whole show that the proposed DM method performs better overall under varied workloads than the other auto-scaling methods. Due to these satisfactory results, the proposed DM method is implemented in the SWITCH software engineering system for time-critical cloud-based applications. Auto-scaling rules along with other properties, such as characteristics of virtualisation platforms, current workload, periodic QoS fluctuations and similar, are continuously stored as Resource Description Framework (RDF) triples in a Knowledge Base (KB), which is included in the proposed architecture.
The primary reason to maintain the KB is to address the different requirements of the SWITCH solution stakeholders, such as those of cloud-based service providers, allowing for seamless information integration, which can be used for long-term trend analysis and support for strategic planning.
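The KB described above stores facts as subject-predicate-object triples. The following is a minimal sketch of that idea using plain tuples; a real deployment would use an RDF store with SPARQL queries, and the prefixes and predicate names below (`app:`, `switch:`) are invented for illustration, not taken from the SWITCH platform:

```python
# Minimal sketch of a knowledge base holding monitoring facts as
# RDF-style (subject, predicate, object) triples. Plain Python tuples
# stand in for a real triple store; all URIs/prefixes are hypothetical.

KB = set()

def add_triple(subject, predicate, obj):
    """Record one fact in the knowledge base."""
    KB.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard),
    mimicking a basic SPARQL triple pattern."""
    return {t for t in KB
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)}

# Facts a stakeholder might record: current workload, a scaling action,
# and a property of the virtualisation platform.
add_triple("app:frontend", "switch:currentWorkload", "850 req/s")
add_triple("app:frontend", "switch:scalingAction", "scale-out")
add_triple("vm:node-1", "switch:platform", "Docker")

# Everything recorded about the front-end service:
for s, p, o in sorted(query(subject="app:frontend")):
    print(p, "->", o)
```

Because every fact shares the same triple shape, information from different stakeholders integrates seamlessly, and accumulated triples can later be scanned for long-term trends, which is the role the KB plays in the proposed architecture.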

Keywords: time-critical applications, cloud computing, self-adaptation, dynamic thresholds, multi-level monitoring, container-based virtualization
