Currently, the increasing integration of dispersed renewable energy sources causes significant voltage fluctuations and power quality challenges at the far end of a radial electric power distribution system. One of many solutions to such challenges is the use of demand response units, which help balance generation and consumption in the distribution system. Demand response units can be controlled in various ways. In this master's thesis we study agent-based control of demand response units in an electric power distribution system.
The agent is an artificial intelligence tool that simulates the actions of an aggregator. It learns from input data to choose appropriate actions from an action pool, which requires distinguishing desired actions from undesired ones. We briefly introduce machine learning and reinforcement learning. Approximate Q-learning, a variant of reinforcement learning, was used during the agent's learning period. The basic agent model was improved with an expanded action pool, and different energy pool constraints were proposed and implemented.
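To make the learning rule concrete, the sketch below shows a minimal approximate Q-learning update in Python. The linear feature function, the state contents and the hyperparameter values are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

# Approximate Q-learning: Q(s, a) is approximated as a linear
# function of hand-crafted features, Q(s, a) = w . f(s, a),
# instead of being stored in a table.

def features(state, action):
    """Hypothetical feature vector built from price, energy pool
    level and the chosen action (illustrative only)."""
    price, pool_level = state
    return np.array([1.0, price, pool_level, action, price * action])

def q_value(weights, state, action):
    return float(weights @ features(state, action))

def update(weights, state, action, reward, next_state, actions,
           alpha=0.1, gamma=0.95):
    """One approximate Q-learning step: nudge the weights toward the
    temporal-difference target r + gamma * max_a' Q(s', a')."""
    best_next = max(q_value(weights, next_state, a) for a in actions)
    td_error = reward + gamma * best_next - q_value(weights, state, action)
    return weights + alpha * td_error * features(state, action)
```

Because the Q-values live in a weight vector over features rather than a lookup table, such an agent can generalize across the continuous price and energy pool levels it observes.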
The aggregator agent can directly affect the voltage profile in the grid by adjusting the consumption of demand response units. It controls them by preparing a schedule for the next 24 hours, choosing from a set of actions: a neutral action, actions that charge the energy pool, and actions that discharge it. Before a suggested schedule becomes active, it must be approved by the Distribution System Operator (DSO). The DSO uses a Traffic Light System (TLS) to check for voltage constraint violations in real time. If a violation occurs, the TLS penalizes the agent financially. The penalty acts as feedback, and the agent gradually learns to avoid such violations.
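The interaction between a proposed schedule and the TLS check can be sketched as follows; the per-unit voltage band, the penalty amount and the `simulate_voltage` hook are hypothetical placeholders, since the abstract does not specify these details.

```python
# Sketch of the DSO's traffic-light check on a proposed 24-hour
# schedule. Actions: +1 charges the energy pool, -1 discharges it,
# 0 is neutral. Limits and penalty are assumed values.
V_MIN, V_MAX = 0.9, 1.1   # per-unit voltage band (assumption)
PENALTY = 100.0           # financial penalty per violation (assumption)

def traffic_light_check(schedule, simulate_voltage):
    """Return the total penalty for a 24-slot schedule.

    `simulate_voltage(hour, action)` is a hypothetical hook that
    returns the resulting bus voltage in per unit for that hour.
    """
    penalty = 0.0
    for hour, action in enumerate(schedule):
        v = simulate_voltage(hour, action)
        if not V_MIN <= v <= V_MAX:   # red light: constraint violated
            penalty += PENALTY
    return penalty
```

In the learning loop, this penalty would be subtracted from the agent's reward, so Q-learning gradually steers it away from schedules that violate voltage constraints.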
To test this procedure, we proposed a modified version of the CIGRE benchmark European low-voltage network. Real-life solar power generation and consumption data were used in the model, and consumers, dispersed energy sources and demand response units were integrated into the grid.
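As one way to reproduce a comparable test grid, the sketch below uses the pandapower library, which ships a ready-made CIGRE low-voltage benchmark network; the library choice, bus index and power ratings are assumptions, since the abstract does not name the simulation tool or the exact modifications.

```python
import pandapower as pp
import pandapower.networks as pn

# Load the CIGRE low-voltage benchmark network; the thesis uses a
# modified variant, so this is only an approximation of the grid.
net = pn.create_cigre_network_lv()

# Attach a PV unit and a controllable demand-response load at an
# example bus; bus index and power values are illustrative.
bus = 15
pp.create_sgen(net, bus=bus, p_mw=0.02, name="PV unit")
pp.create_load(net, bus=bus, p_mw=0.01, controllable=True,
               name="demand response unit")

pp.runpp(net)                      # run a power flow
print(net.res_bus.vm_pu.head())    # per-unit bus voltages
```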
We observed that the agent adapted well to the conditions in the grid and concluded that it does not choose actions solely on the basis of the electricity price. In the first third of the day the electricity price is low; in the remaining two thirds it is relatively high. Low voltages occur in the early hours, while high voltages occur around midday due to solar power generation. The agent is therefore forced to charge the energy pool around midday, when the price is relatively high, so that it can discharge the pool when the price reaches its peak. The agent thus chooses its actions based on both electricity prices and grid constraints.