
Optimization of Extract-Transform-Load Processes with Spark and the Databricks Platform
Rutar, Filip (Author); Sedlar, Urban (Mentor)

PDF - Presentation file (3.28 MB)
MD5: 166DC1A72A58D23702B7F537D82FC539

Abstract
Companies today face ever-growing volumes of data, so processing them efficiently is critical for gaining insights, optimizing business processes, and maintaining a competitive edge. The aim of this master's thesis is to present, test, and evaluate the impact of various optimizations on the performance of extract, transform, load (ETL) processes using Apache Spark on the Databricks platform, with the goal of reducing costs and improving execution speed. The theoretical part reviews the basic concepts of data engineering, including the structure and purpose of the individual ETL stages and how such processes are built. This is followed by an in-depth presentation of Apache Spark, focusing on core concepts such as MapReduce and the Resilient Distributed Dataset (RDD), as well as the technologies used for query optimization. Different architectures for storing and managing data, such as databases, data warehouses, and data lakes, are compared. A key feature of Spark is its ability to break jobs into smaller units of work and distribute them across multiple servers, with execution coordinated by a cluster manager. Spark processes can run either on self-hosted infrastructure or on cloud services such as the Databricks platform. The practical part analyzes four ETL processes that differ in complexity, volume of processed data, execution time, and the types of operations performed, together forming a varied sample of real-world data pipelines. Various optimizations are tested on these processes, divided into two main categories: infrastructure-level optimizations and optimizations of individual tasks within a process. The results show significant improvements in both cost efficiency and execution speed. The largest impact came from the use of spot and fleet server instances, which, together with the other optimizations, enabled up to an 80% reduction in costs and up to 40% faster execution. Further optimizations, such as restructuring and reordering tasks, using explicitly defined data schemas, and using custom user-defined functions, additionally improved the reliability of the processes and simplified their creation and maintenance. In addition, two storage optimizations reduced storage costs by up to 90% while also improving data access speed.
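The single largest gains reported above came from spot and fleet server instances. As a rough, non-authoritative illustration of how such a setup is typically expressed on Databricks, the sketch below builds a cluster specification for the Databricks Clusters API on AWS; the cluster name, runtime version, node type, and worker count are invented examples, not values from the thesis.

import json

# Minimal sketch of a Databricks (AWS) cluster spec that combines a fleet
# instance type with spot capacity; every concrete value is illustrative.
cluster_spec = {
    "cluster_name": "etl-optimized",           # hypothetical name
    "spark_version": "13.3.x-scala2.12",       # example Databricks runtime
    "node_type_id": "m-fleet.xlarge",          # AWS fleet instance type
    "num_workers": 4,
    "aws_attributes": {
        "first_on_demand": 1,                  # keep the driver on-demand
        "availability": "SPOT_WITH_FALLBACK",  # spot workers, on-demand fallback
        "spot_bid_price_percent": 100,         # bid up to the on-demand price
    },
}

print(json.dumps(cluster_spec, indent=2))      # e.g. body for POST /api/2.0/clusters/create

Keeping the driver on-demand while workers run as spot instances with an on-demand fallback is a common way to capture the spot discount without losing a job when capacity is reclaimed.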
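The abstract also credits explicitly defined data schemas with improving reliability. A minimal PySpark sketch of the idea, assuming a hypothetical JSON source and column set: declaring the schema skips Spark's inference pass over the input and surfaces type mismatches immediately instead of letting them propagate.

from pyspark.sql import SparkSession
from pyspark.sql.types import (LongType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.getOrCreate()

# Explicit schema: no inference pass, and malformed or mistyped input
# fails fast rather than producing wrongly typed columns downstream.
schema = StructType([
    StructField("event_id", LongType(), nullable=False),
    StructField("user_id", StringType(), nullable=True),
    StructField("event_time", TimestampType(), nullable=True),
])

events = (spark.read
          .schema(schema)              # instead of .option("inferSchema", "true")
          .json("/mnt/raw/events/"))   # hypothetical source path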
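Likewise, the custom user-defined functions mentioned among the task-level optimizations can be pictured as shared column transformations packaged for reuse. A hedged sketch with invented function and column names; note that built-in column functions remain preferable where they exist, since UDFs bypass Spark's query optimizer.

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(" si ",), ("AT",), (None,)], ["country_raw"])

# Packaging shared cleaning logic as a UDF keeps the same rule uniform
# across pipelines and easy to test and maintain.
@udf(returnType=StringType())
def normalize_country(code):
    return code.strip().upper() if code is not None else None

df.withColumn("country", normalize_country("country_raw")).show()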

Language: Slovenian
Keywords: data engineering, Spark, Databricks, ETL
Work type: Master's thesis/paper
Organization: FE - Faculty of Electrical Engineering
Year: 2024
PID: 20.500.12556/RUL-162992
Publication date in RUL: 30.09.2024
Views: 82
Downloads: 135
Metadata: XML, DC-XML, DC-RDF

