
Stohastična optimizacija v diskretnem času (Discrete-time stochastic control): final seminar paper
Pavšič, Darjan (Author), Perman, Mihael (Mentor)

PDF - Presentation file (294.20 KB)
MD5: EBFADE3BFFE0F12382568B131BAFE374

Abstract
The thesis presents the problems of deterministic and stochastic optimization in discrete time. We look at the idea behind such models in an intuitive sense and at the motivation for using them. Under reasonable assumptions we derive solution methods based on dynamic programming and the Hamilton--Jacobi--Bellman equation, both for the finite and for the infinite horizon in discrete time. To make the theory easier to follow, several examples are presented throughout the thesis, ranging from very simple ones to somewhat more demanding ones; they help in understanding the material and can at the same time serve as a basis for more sophisticated models. To illustrate graphically the usefulness of discrete-time stochastic optimization, the thesis also includes several plots obtained by simulating the examples with and without the optimal control.
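
For orientation, the finite-horizon dynamic programming (Bellman) recursion in discrete time that the abstract refers to can be sketched as follows; the notation (per-period utility u_t, value function V_t, set of admissible controls A, state process X_t, horizon T) is chosen here for illustration and is not taken from the record itself:

V_T(x) = u_T(x),
V_t(x) = \max_{a \in A} \, \mathbb{E}\bigl[\, u_t(x,a) + V_{t+1}(X_{t+1}) \,\bigm|\, X_t = x,\; a_t = a \,\bigr], \qquad t = T-1, \dots, 0.

Solving the recursion backwards from the terminal time T gives the value function at every time step together with a maximizing control a_t^*(x), i.e. an optimal feedback policy.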

Language: Slovenian
Keywords: deterministic optimization, stochastic optimization, dynamic programming, HJB equation, value function, utility function, optimal control
Work type: Final seminar paper
Typology: 2.11 - Undergraduate Thesis
Organization: FMF - Faculty of Mathematics and Physics
Year: 2020
PID: 20.500.12556/RUL-119938
UDC: 519.8
COBISS.SI-ID: 58665475
Publication date in RUL: 13.09.2020
Views: 1439
Downloads: 147

Secondary language

Language: English
Title: Discrete-time stochastic control
Abstract:
In this paper we present both deterministic and stochastic control in discrete time. We look at the idea behind such models in an intuitive sense and at the motivation for using them. Under reasonable assumptions we develop solution methods using dynamic programming and the Hamilton--Jacobi--Bellman equation, for both the finite and the infinite horizon in discrete time. To help the reader understand the theory, we introduce numerous examples, from very simple ones to more complex models, which facilitate understanding of the topic and can serve as a basis for developing more sophisticated models. Some graphs obtained by simulating the examples are also included and serve as a graphic demonstration of the benefits of discrete-time stochastic control.
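
As a rough illustration of the kind of simulation mentioned above (a controlled versus an uncontrolled path), the following Python sketch solves a simple scalar linear-quadratic problem by backward dynamic programming (a Riccati recursion) and then simulates both paths on the same noise. The model, the parameter values and all names (a, b, q, r, qT, sigma, K) are illustrative assumptions and are not taken from the thesis.

import numpy as np

# A minimal sketch, not from the thesis: scalar linear-quadratic control
#   x_{t+1} = a*x_t + b*u_t + w_t,  cost = sum_t (q*x_t^2 + r*u_t^2) + qT*x_T^2.
rng = np.random.default_rng(0)
a, b, q, r, qT, sigma, T = 1.05, 1.0, 1.0, 0.5, 1.0, 0.3, 50

# Backward pass: value function V_t(x) = P[t]*x^2 + const, feedback u_t = -K[t]*x_t.
P = np.zeros(T + 1)
K = np.zeros(T)
P[T] = qT
for t in range(T - 1, -1, -1):
    K[t] = a * b * P[t + 1] / (r + b**2 * P[t + 1])       # optimal gain
    P[t] = q + a**2 * P[t + 1] - a * b * P[t + 1] * K[t]  # Riccati recursion

# Forward simulation with the same noise, with and without the optimal control.
x_ctrl, x_free = 1.0, 1.0
path_ctrl, path_free = [x_ctrl], [x_free]
for t in range(T):
    w = rng.normal(0.0, sigma)
    x_ctrl = a * x_ctrl - b * K[t] * x_ctrl + w   # controlled dynamics
    x_free = a * x_free + w                       # no control (u_t = 0)
    path_ctrl.append(x_ctrl)
    path_free.append(x_free)

print("final |x| with optimal control:", round(abs(path_ctrl[-1]), 3))
print("final |x| without control:     ", round(abs(path_free[-1]), 3))

Plotting path_ctrl and path_free against time would reproduce the qualitative picture the abstract describes: with the slightly unstable dynamics (a > 1) the uncontrolled path tends to drift away, while the feedback u_t = -K_t x_t keeps the controlled path near zero.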

Keywords: deterministic control, stochastic control, dynamic programming, HJB equation, value function, utility function, optimal control
