Random walk Monte Carlo

Random walk Monte Carlo methods are a class of algorithms used mainly in Bayesian statistics and computational physics to numerically calculate multi-dimensional integrals. In these methods, an ensemble of "walkers" moves around randomly. At each point a walker visits, the integrand is evaluated and its value counted towards the integral. The walker may then make a number of tentative steps around the area, looking for a place with a reasonably high contribution to the integral to move to next. Random walk methods are a kind of random simulation or Monte Carlo method.
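
To make the idea concrete, here is a minimal Python sketch (not part of the original article; the target density and integrand are illustrative choices) showing how an expectation is estimated by averaging function values over visited points. For simplicity it uses independent draws from the target; the random walk methods described below replace these with correlated walker positions.

    import random

    def f(x):
        # Integrand whose expectation we want under the target density.
        return x * x

    # Draw independent samples from a standard normal target (illustrative choice).
    samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

    # Monte Carlo estimate of the integral of f(x) * p(x) dx:
    # each visited point contributes its function value to the running average.
    estimate = sum(f(x) for x in samples) / len(samples)
    print(f"estimated E[X^2] ~ {estimate:.3f} (exact value is 1)")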

Markov chain Monte Carlo methods (or MCMC methods) are those in which the direction the walker is likely to move next depends only on its current position and the function value nearby. These methods are easy to implement and analyse, but unfortunately it can take a long time for the walker to explore all of the space. The walker will often double back and revisit ground it has already covered. This problem is called "slow mixing".

More sophisticated algorithms use some method of preventing the walker from doubling back. For example, in "self-avoiding walk" (SAW) routines, the walker remembers where it has been before (at least for a few steps) and avoids stepping on those locations again. These algorithms are harder to implement, but may exhibit faster convergence (i.e. fewer steps for an accurate result). Various statistical problems can arise, however: for example, what happens when a walker paints itself into a corner?

Random walk algorithms

Rejection Monte Carlo Sampling
Approximates the target distribution using another distribution, known as a proposal density, from which samples can be drawn directly. Samples are drawn from the proposal density and then conditionally rejected so that the accepted samples follow the target density (sketched below). This method is simple but does not scale well to high dimensions.
Adaptive Rejection Monte Carlo Sampling
A variant of rejection sampling that modifies the proposal density on the fly.
Metropolis-Hastings Markov Chain Monte Carlo Sampling
Generates a random walk using a proposal density together with a rule for accepting or rejecting the proposed moves (sketched below).
Gibbs Monte Carlo Sampling
Requires all the conditional distributions of the target distribution to be known in closed form, so that each variable can be resampled in turn from its conditional (sketched below). Gibbs sampling has the advantage that no separate proposal density needs to be designed. However, it can run into problems when variables are strongly correlated. When this happens, a technique called over-relaxation can be used.
Hybrid Markov Chain Monte Carlo Sampling
Tries to avoid random walk behaviour by introducing an auxiliary momentum vector and simulating Hamiltonian dynamics in which the potential energy is the negative logarithm of the target density. The momentum samples are discarded after each update. The end result is that proposals move across the sample space in larger steps; they are therefore less correlated and converge to the target distribution more rapidly (sketched below).
Slice Sampling
Depends on the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. This method alternates uniform sampling in the vertical direction with uniform sampling from the horizontal "slice" defined by the current vertical position (sketched below).
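
The following Python sketches are not from the original article; each uses a simple illustrative target density and should be read as a rough illustration of the corresponding method, not a reference implementation. First, rejection sampling with a standard normal proposal and a hand-chosen envelope constant (both assumptions made for this example):

    import math
    import random

    def target(x):
        # Unnormalized target density (illustrative choice).
        return math.exp(-x ** 4)

    def proposal_pdf(x):
        # Standard normal proposal density, easy to sample from directly.
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    M = 3.0  # envelope constant chosen so that target(x) <= M * proposal_pdf(x)

    def rejection_sample(n):
        samples = []
        while len(samples) < n:
            x = random.gauss(0.0, 1.0)  # draw from the proposal
            if random.random() < target(x) / (M * proposal_pdf(x)):
                samples.append(x)       # accept; otherwise reject and try again
        return samples

    print(rejection_sample(5))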
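
A minimal Metropolis sketch, assuming a symmetric Gaussian random-walk proposal and a standard normal target (so the Hastings correction term cancels):

    import math
    import random

    def log_target(x):
        # Log of an unnormalized target density (illustrative: standard normal).
        return -0.5 * x * x

    def metropolis(n_steps, step_size=1.0, x0=0.0):
        x = x0
        chain = []
        for _ in range(n_steps):
            x_prop = x + random.gauss(0.0, step_size)  # symmetric random-walk proposal
            log_alpha = log_target(x_prop) - log_target(x)
            if log_alpha >= 0 or random.random() < math.exp(log_alpha):
                x = x_prop  # accept with probability min(1, ratio); otherwise stay put
            chain.append(x)
        return chain

    chain = metropolis(10_000)
    print(sum(chain) / len(chain))  # should be near the target mean, 0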
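
A minimal Gibbs sampling sketch for an illustrative bivariate normal target with correlation 0.8, whose two conditional distributions are normal and known in closed form:

    import math
    import random

    RHO = 0.8  # correlation of the illustrative bivariate normal target

    def gibbs(n_steps, x0=0.0, y0=0.0):
        x, y = x0, y0
        chain = []
        cond_sd = math.sqrt(1.0 - RHO * RHO)  # std. dev. of each conditional
        for _ in range(n_steps):
            # Resample each coordinate from its full conditional in turn.
            x = random.gauss(RHO * y, cond_sd)  # x | y ~ N(rho*y, 1 - rho^2)
            y = random.gauss(RHO * x, cond_sd)  # y | x ~ N(rho*x, 1 - rho^2)
            chain.append((x, y))
        return chain

    chain = gibbs(10_000)
    print(sum(x * y for x, y in chain) / len(chain))  # should be near rho = 0.8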
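
A minimal hybrid (Hamiltonian) Monte Carlo sketch for a standard normal target, using a leapfrog integrator; the step size and number of leapfrog steps are arbitrary illustrative values:

    import math
    import random

    def U(x):       # potential energy = -log of the target density (illustrative: standard normal)
        return 0.5 * x * x

    def grad_U(x):  # gradient of the potential energy
        return x

    def hmc(n_steps, leapfrog_steps=20, eps=0.1, x0=0.0):
        x = x0
        chain = []
        for _ in range(n_steps):
            p = random.gauss(0.0, 1.0)  # resample the auxiliary momentum
            x_new, p_new = x, p
            # Leapfrog integration of Hamiltonian dynamics.
            p_new -= 0.5 * eps * grad_U(x_new)
            for _ in range(leapfrog_steps):
                x_new += eps * p_new
                p_new -= eps * grad_U(x_new)
            p_new += 0.5 * eps * grad_U(x_new)  # undo the extra half kick
            # Metropolis accept/reject on the change in total energy (Hamiltonian).
            dH = (U(x_new) + 0.5 * p_new * p_new) - (U(x) + 0.5 * p * p)
            if dH <= 0 or random.random() < math.exp(-dH):
                x = x_new  # the momentum is discarded either way
            chain.append(x)
        return chain

    chain = hmc(5_000)
    print(sum(chain) / len(chain))  # should be near the target mean, 0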
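
A minimal slice sampling sketch for a target whose horizontal slice can be written down analytically (a standard normal shape); general-purpose slice samplers instead locate the slice numerically, for example by stepping out:

    import math
    import random

    def target(x):
        # Unnormalized target density (illustrative: standard normal shape).
        return math.exp(-0.5 * x * x)

    def slice_sample(n_steps, x0=0.0):
        x = x0
        chain = []
        for _ in range(n_steps):
            # Vertical step: pick a height uniformly under the density at x
            # (done on the log scale to avoid numerical underflow).
            log_u = math.log(target(x)) - random.expovariate(1.0)
            # Horizontal step: sample uniformly from the slice {x : log target(x) > log_u}.
            # For this target the slice is the interval (-w, w) with w as below.
            w = math.sqrt(-2.0 * log_u)
            x = random.uniform(-w, w)
            chain.append(x)
        return chain

    chain = slice_sample(10_000)
    print(sum(x * x for x in chain) / len(chain))  # should be near the variance, 1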

Also see linear programming.


