
Deep reinforcement learning of a strategy for electricity trading: master's thesis
Golob, Ana (Author), Todorovski, Ljupčo (Mentor)

PDF - Presentation file, Download (1.84 MB)
MD5: F8DB0DAA199F79ECE837FEDD61A7B3FB

Abstract
Trading on organised electricity markets poses a major challenge due to the markets' high complexity and stochastic nature. Today, numerous advanced approaches from data analysis and artificial intelligence are of great help in the search for trading strategies. In this thesis we explore the possibility of using deep reinforcement learning methods to find a trading strategy on the electricity market. In these methods, market dynamics are modelled as Markov decision processes. To find an effective strategy, we implement a reinforcement learning agent that learns an optimal trading strategy from historical data. The thesis devotes considerable attention to the theory of reinforcement learning, which guides learning through rewards and penalties. Among the various approaches within reinforcement learning, we focus on the Q-learning method. Because the electricity market has infinitely many possible states, this method must be extended with deep neural networks. We therefore use the deep Q-learning algorithm to train the trading agent's strategy. The effectiveness of the learned strategy is measured by total profit and by metrics such as the average margin and the percentage of trades closed with a positive profit. The electricity-trading algorithm implemented in this thesis achieves stable and sufficiently fast learning convergence. The results show that the learned strategy substantially improves on a random trading strategy on the test set.

Language: Slovenian
Keywords: electricity trading, deep reinforcement learning, Q-learning algorithm
Work type: Master's thesis
Organization: FMF - Faculty of Mathematics and Physics
Year: 2020
PID: 20.500.12556/RUL-122395
UDC: 004.42
COBISS.SI-ID: 43667971
Publication date in RUL: 09.12.2020
Views: 904
Downloads: 228

Secondary language

Language: English
Title: Deep reinforcement learning of a strategy for electricity trading
Abstract:
Trading with electricity presents great challenges due to the complexity and stochastic characteristics of the markets. Nowadays, when establishing new trading strategies, advanced approaches from data analysis and artificial intelligence are of great help and importance. In this master's thesis our goal is to explore the possibility of using deep reinforcement learning methods as a basis for new electricity trading strategies. In these methods, electricity markets are modelled as Markov decision processes. To establish an efficient strategy, we implement a reinforcement learning agent that uses historical trade data to learn an optimal trading strategy. The thesis places considerable emphasis on the theory of reinforcement learning, in which rewards and punishments guide the learning process. Among the different approaches within reinforcement learning theory and practice, we focus on the Q-learning method. Because the electricity market has an infinite number of possible states, the Q-learning method must be extended, which we address with deep neural networks. The result is a deep Q-learning algorithm, which is used to train the agent's trading strategy. The effectiveness of the developed strategy is measured by total profit and by other metrics such as average margin and win ratio. The algorithm for electricity trading developed in this thesis provides stable and sufficiently fast learning convergence. Trading results with the implemented algorithm show a significant improvement over random trading.
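The abstract describes training an agent with the Q-learning update, extended to function approximation for the market's infinite state space. A minimal sketch of that update, using a linear approximator in place of the deep network and an entirely illustrative toy environment (the feature count, action set, and reward are assumptions, not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_actions = 3, 3            # toy market features; buy / hold / sell
alpha, gamma, epsilon = 0.05, 0.5, 0.2  # step size, discount, exploration rate

def featurize(x):
    # prepend a bias term so the linear model can represent constant values
    return np.concatenate(([1.0], x))

# one weight vector per action: Q(s, a) = w[a] @ phi(s)
w = np.zeros((n_actions, n_features + 1))

def q_values(phi):
    return w @ phi

def act(phi):
    # epsilon-greedy: explore with probability epsilon, else act greedily
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(phi)))

def update(phi, a, r, phi_next):
    # semi-gradient Q-learning: move Q(s, a) toward r + gamma * max_a' Q(s', a')
    target = r + gamma * np.max(q_values(phi_next))
    w[a] += alpha * (target - q_values(phi)[a]) * phi

# toy interaction loop: reward 1 only for action 0, states are random noise
phi = featurize(rng.normal(size=n_features))
for _ in range(5000):
    a = act(phi)
    r = 1.0 if a == 0 else 0.0
    phi_next = featurize(rng.normal(size=n_features))
    update(phi, a, r, phi_next)
    phi = phi_next

print(int(np.argmax(q_values(featurize(np.zeros(n_features))))))  # greedy action
```

In the deep variant described in the thesis, the linear map `w[a] @ phi(s)` is replaced by a neural network, which is what allows the method to cope with the market's infinite state space.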

Keywords: electricity trading, deep reinforcement learning, Q-learning algorithm
