
Agent-based control of flexible consumers in the distribution system
JERIHA, JAN (Author), GUBINA, Andrej Ferdo (Mentor)

PDF - Presentation file, Download (3.26 MB)
MD5: 5259A8BD57D7209A3F44356D71B98B80
PID: 20.500.12556/rul/81ca231b-fe75-4f4c-863e-239db709bf48

Abstract
The distribution system is seeing ever greater integration of dispersed renewable sources, which causes voltage quality problems at the ends of feeders. One possible solution to such problems is demand response, which ensures that excessive local imbalances between generation and consumption do not arise in the distribution network. Flexible consumers can be controlled in several ways. This master's thesis investigates the impact of agent-based control of flexible consumers in the electricity distribution system. The agent is an artificial intelligence tool that simulates the actions of an aggregator. Based on its current state and on the input data, the trained agent decides on its next action; the agent must first be taught the utility of individual actions. The thesis covers the basics of machine learning, with an emphasis on reinforcement learning. Approximate Q-learning, one variant of reinforcement learning, was used to train the agent. The basic model was extended with varying numbers of actions available to the agent and with different energy pool constraints. By dispatching the optimal amount of flexible consumers' consumption, the aggregator agent influences the voltage conditions in the network. It manages the consumers by creating a consumption schedule for them for the next 24 hours, choosing from a neutral action, actions that charge the energy pool, and actions that discharge it. Before execution, the schedule must be approved by the distribution system operator, using a system that checks the impact of the flexible consumers' schedule on the network. This is done with a traffic light system, which checks for voltage quality violations and financially penalizes the agent whenever a violation occurs. Through this feedback the agent learns to manage the consumers efficiently and to avoid penalties. To test the presented procedure, we used a modified CIGRE model of a typical European low-voltage network. Real solar power plant generation and load consumption data were used in the model; the network contains consumers, dispersed sources and demand response units. We found that the agent adapts well to the conditions in the network. When deciding between charging and discharging, the agent does not rely solely on the electricity price, which is low in the first third of the day and relatively high in the other two thirds; in addition to the price, it also takes the network conditions into account. Due to voltage constraints caused by solar generation, the agent is forced to charge in a period when the price is high, but not at its peak. Only then can the agent exploit the energy pool and discharge it during the evening hours, when the price is highest.
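
As an illustration of the approximate Q-learning named in the abstract, the following is a minimal Python sketch of a linear (feature-based) Q-update with an epsilon-greedy choice over a charge/discharge/neutral action pool. The feature set, action names, pool capacity and numeric values are illustrative assumptions, not the model actually used in the thesis.

import random

POOL_CAPACITY = 100.0  # kWh; illustrative energy pool constraint

# Hypothetical action pool: one neutral action plus charging and
# discharging the energy pool at two power levels.
ACTIONS = ["neutral", "charge_low", "charge_high",
           "discharge_low", "discharge_high"]

def features(state, action):
    """Map a (state, action) pair to a feature vector. The state is
    assumed to hold (hour of day, electricity price, stored energy)."""
    hour, price, stored = state
    charging = 1.0 if action.startswith("charge") else 0.0
    discharging = 1.0 if action.startswith("discharge") else 0.0
    return [1.0, price * charging, price * discharging,
            stored / POOL_CAPACITY, hour / 24.0]

def q_value(weights, state, action):
    """Linear approximation: Q(s, a) = w . f(s, a)."""
    return sum(w * f for w, f in zip(weights, features(state, action)))

def update(weights, state, action, reward, next_state,
           alpha=0.05, gamma=0.95):
    """One approximate Q-learning step:
    w <- w + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)) * f(s, a)."""
    best_next = max(q_value(weights, next_state, a) for a in ACTIONS)
    td_error = reward + gamma * best_next - q_value(weights, state, action)
    return [w + alpha * td_error * f
            for w, f in zip(weights, features(state, action))]

def epsilon_greedy(weights, state, epsilon=0.1):
    """Explore with probability epsilon, otherwise act greedily."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_value(weights, state, a))

# Hypothetical training step: the reward would be the hour's trading
# result minus any financial penalty from the DSO's traffic light check.
weights = [0.0] * 5
state = (18, 0.21, 40.0)  # (hour, price in EUR/kWh, stored energy in kWh)
action = epsilon_greedy(weights, state)
weights = update(weights, state, action, reward=-5.0,
                 next_state=(19, 0.25, 60.0))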

Language: Slovenian
Keywords: demand response units, voltage quality in the low-voltage network, reinforcement learning, approximate Q-learning, impact of grid constraints
Work type: Master's thesis/paper
Organization: FE - Faculty of Electrical Engineering
Year: 2018
PID: 20.500.12556/RUL-99954
Publication date in RUL: 26.02.2018
Views: 1741
Downloads: 609

Secondary language

Language: English
Title: Agent-based control of flexible consumers in the distribution system
Abstract:
Currently, the increasing integration of dispersed renewable energy sources poses significant voltage fluctuation and power quality challenges at the distant end of a radial electric power distribution system. One possible solution to such challenges is demand response units, which help balance generation and consumption in the distribution system. Demand response units can be controlled in various ways. In this master's thesis we study agent-based control of demand response units in the electric power distribution system. An agent is an artificial intelligence tool that simulates the actions of an aggregator. The agent learns from input data to choose actions from an action pool, and it must differentiate between desired and undesired actions. Machine learning and reinforcement learning are both introduced briefly. Approximate Q-learning, a variant of reinforcement learning, was used to train the agent. The basic model of the agent was improved with an expanded action pool, and different energy pool constraints were proposed and implemented. The aggregator agent can directly affect the voltage profile in the grid by adjusting the consumption of demand response units; it controls them by preparing a schedule for the next 24 hours, choosing from a neutral action, actions that charge the energy pool, and actions that discharge it. Before the suggested schedules become active, they must be approved by the Distribution System Operator (DSO). The DSO uses a Traffic Light System (TLS) to check for voltage constraint violations in real time; if a violation occurs, the TLS penalizes the agent financially. The penalty acts as feedback, and the agent gradually learns to avoid such penalties. To test this procedure, we used a modified CIGRE benchmark model of a European low-voltage network. Real-life solar power generation and consumption data were used in the model, with consumers, dispersed energy sources and demand response units integrated in the grid. We observed that the agent adapted well to the conditions in the grid and concluded that it does not choose actions solely on the basis of the electricity price. The price is low in the first third of the day and relatively high in the other two thirds, while low voltages occur in the early hours and high voltages at midday due to solar power generation. The agent is therefore forced to charge the energy pool at midday, when the price is relatively high but not at its peak, so that it can discharge the pool when the price peaks in the evening. The agent thus bases its actions on both electricity prices and grid constraints.
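
The traffic light approval step described above can be sketched as a simple schedule check; this is an assumed interface, not the DSO system from the thesis. The names traffic_light and run_load_flow, the +/-10 % per-unit voltage band (in line with the EN 50160 limits commonly applied to low-voltage networks) and the penalty value are all hypothetical.

V_MIN, V_MAX = 0.90, 1.10  # assumed per-unit limits (nominal +/- 10 %)

def traffic_light(schedule, run_load_flow, penalty_per_violation=100.0):
    """Check a 24-hour consumption schedule and return (approved, penalty).

    `schedule` lists 24 power set-points for the flexible consumers;
    `run_load_flow(hour, setpoint)` is assumed to return the per-unit
    bus voltages of the network for that hour."""
    violations = 0
    for hour, setpoint in enumerate(schedule):
        voltages = run_load_flow(hour, setpoint)
        if any(v < V_MIN or v > V_MAX for v in voltages):
            violations += 1
    return violations == 0, violations * penalty_per_violation

In the learning sketch above, the returned penalty would enter the agent's reward with a negative sign, which is the feedback the abstract refers to.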

Keywords: Demand Response Units, Power Quality in Low Voltage Grid, Reinforcement Learning, Approximate Q Learning, Grid Constraints
