Markov chains are random processes in which the probabilities of future events depend only on the current state of the process and not on events that occurred in the past. When a Markov chain is ergodic, we can study its limiting behaviour. The limit theorem described in this paper underpins all further work with Markov chains and Markov chain Monte Carlo (MCMC) methods, which are used to determine the properties of distributions that are otherwise not precisely known. One example of an MCMC algorithm is the Metropolis-Hastings algorithm, which is used here to solve the knapsack problem, a well-known NP-hard optimization problem. This paper describes the search for a solution via the construction of a Metropolis-Hastings chain whose elements belong to the set of feasible solutions. As the number of steps of the algorithm increases, and with it the number of suitable candidates found, so does the probability that we have found the best possible selection of items fitting into a knapsack with a given weight limit.
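To make the idea concrete, the following is a minimal sketch (not the paper's exact construction) of a Metropolis-Hastings chain on the set of feasible knapsack selections. The target distribution, the inverse-temperature parameter `beta`, and the single-item-flip proposal are illustrative assumptions: states are 0/1 inclusion vectors, the proposal flips one randomly chosen item, infeasible proposals are rejected, and the (assumed) target weights a selection proportionally to `exp(beta * total value)`.

```python
import math
import random

def metropolis_hastings_knapsack(values, weights, capacity,
                                 steps=10000, beta=0.05, seed=0):
    """Illustrative Metropolis-Hastings chain over feasible knapsack states.

    State: a 0/1 inclusion vector.  Proposal: flip one random item
    (symmetric, so the acceptance ratio reduces to the target ratio).
    Assumed target: pi(x) proportional to exp(beta * value(x)),
    restricted to selections whose total weight fits the capacity.
    A small beta keeps the chain exploring rather than freezing.
    """
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n                     # the empty knapsack is always feasible
    cur_val, cur_wt = 0, 0
    best, best_val = x[:], 0        # track the best candidate seen so far
    for _ in range(steps):
        i = rng.randrange(n)        # propose flipping item i
        sign = 1 if x[i] == 0 else -1
        delta_v = sign * values[i]
        delta_w = sign * weights[i]
        if cur_wt + delta_w > capacity:
            continue                # infeasible proposal: chain stays put
        # Symmetric proposal: accept with probability min(1, exp(beta*delta_v)).
        if delta_v >= 0 or rng.random() < math.exp(beta * delta_v):
            x[i] ^= 1
            cur_val += delta_v
            cur_wt += delta_w
            if cur_val > best_val:
                best_val, best = cur_val, x[:]
    return best, best_val
```

Recording the best state visited, rather than returning the final state, is a common heuristic add-on: the chain itself samples from the assumed target, and longer runs visit the high-value selections with higher probability, which is exactly the behaviour the abstract describes.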