
Markov chains and the Metropolis-Hastings algorithm: final seminar paper
Zupan, Iza (Author), Bernik, Janez (Mentor)

PDF - Presentation file (516.43 KB)
MD5: 0BA674F9E1930D5BC9F046512977BFE7

Abstract
Markov chains are random processes in which the probabilities of future events depend only on the current state. When a Markov chain is ergodic, we study its limiting behaviour. The limit theorem described in this thesis is of key importance for all further work with Markov chains and Monte Carlo Markov chains (MCMC), which are used to determine properties of distributions that are otherwise not known precisely. One example of an MCMC algorithm is the Metropolis-Hastings algorithm, with which we solve the knapsack problem, known as an NP-hard optimization problem. This thesis describes the search for a solution by constructing a Metropolis-Hastings chain whose states are elements of the set of feasible solutions. As the number of steps taken by the algorithm grows, and with it the number of suitable candidates found, so does the probability that we have found the best possible selection of items that still fit into a knapsack with a given weight limit.
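The limiting behaviour mentioned in the abstract can be illustrated numerically: for an ergodic chain, repeatedly applying the transition matrix to any initial distribution converges to the unique stationary distribution. A minimal sketch in Python, using a hypothetical 3-state transition matrix `P` chosen only for illustration (it is not taken from the thesis):

```python
# Hypothetical 3-state ergodic transition matrix (rows sum to 1),
# chosen only for illustration.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def step(mu, P):
    """One step of the chain: mu' = mu * P (row vector times matrix)."""
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

mu = [1.0, 0.0, 0.0]              # start deterministically in state 0
for _ in range(100):
    mu = step(mu, P)

# For an ergodic chain, mu converges to the unique stationary distribution
# pi satisfying pi = pi * P, regardless of the starting distribution.
assert all(abs(mu[j] - step(mu, P)[j]) < 1e-9 for j in range(3))
```

Starting the same iteration from a different initial distribution yields the same limit, which is exactly the independence from the starting state that the limit theorem guarantees.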

Language: Slovenian
Keywords: Markov chains, stochastic process, Monte Carlo Markov chains, Metropolis-Hastings algorithm, knapsack problem
Work type: Final seminar paper
Typology: 2.11 - Undergraduate Thesis
Organization: FMF - Faculty of Mathematics and Physics
Year: 2023
PID: 20.500.12556/RUL-148431
UDC: 519.2
COBISS.SI-ID: 162120451
Publication date in RUL: 23.08.2023
Views: 1392
Downloads: 93

Secondary language

Language: English
Title: Markov chains and the Metropolis-Hastings algorithm
Abstract:
Markov chains are random processes in which the probabilities of future events depend only on the current state of the process and not on events that occurred in the past. When a Markov chain is ergodic, we study its limiting behaviour. The limit theorem described in this paper is important for all further work with Markov chains and Monte Carlo Markov chains (MCMC), which are used to find properties of distributions that are otherwise not precisely known. An example of an MCMC algorithm is the Metropolis-Hastings algorithm, which is used here to solve the knapsack problem, known as an NP-hard optimization problem. This paper describes the search for a solution via the construction of a Metropolis-Hastings chain whose elements belong to the set of feasible solutions. As the number of steps of the algorithm increases, and thus the number of suitable candidates found, the probability that we have found the best possible selection of items that fit into a knapsack with a given weight limit increases as well.
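The construction described in the abstract can be sketched in code. Below is a minimal, illustrative Metropolis-Hastings sampler for the 0/1 knapsack problem: the states are feasible item selections, the proposal flips one uniformly random item, and the target distribution (a Boltzmann distribution with a hypothetical inverse temperature `beta`) favours selections of high total value. The parameter values and function name are assumptions for illustration, not taken from the thesis:

```python
import math
import random

def metropolis_hastings_knapsack(values, weights, capacity,
                                 steps=10000, beta=0.01, seed=1):
    """Run a Metropolis-Hastings chain over feasible knapsack selections.

    State: a 0/1 inclusion vector x. Proposal: flip one uniformly random
    item (symmetric). Target: pi(x) proportional to exp(beta * value(x)),
    restricted to selections with total weight <= capacity.
    """
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n                       # the empty knapsack is always feasible
    wt = val = 0
    best, best_val = x[:], 0
    for _ in range(steps):
        i = rng.randrange(n)
        sign = 1 - 2 * x[i]           # +1 if adding item i, -1 if removing
        dw, dv = sign * weights[i], sign * values[i]
        if wt + dw > capacity:
            continue                  # infeasible proposal: chain stays put
        # Symmetric proposal, so accept with probability min(1, exp(beta*dv)).
        if dv >= 0 or rng.random() < math.exp(beta * dv):
            x[i] ^= 1
            wt += dw
            val += dv
            if val > best_val:        # remember the best selection seen so far
                best, best_val = x[:], val
    return best, best_val
```

On a toy instance with values (60, 100, 120), weights (10, 20, 30) and capacity 50, the chain visits every feasible selection and, given enough steps, almost surely records the optimal value 220 as its best candidate, illustrating the abstract's point that more steps raise the probability of having found the best feasible selection.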

Keywords:Markov chains, stochastic process, Monte Carlo Markov chains, Metropolis-Hastings algorithm, the knapsack problem

