This proposal density would generate samples centred around the current state with variance $\sigma^2 I$. So we draw a new proposal state $x'$ from $Q(x' \mid x_t)$ and then calculate a value

$$a = a_1 a_2,$$

where

$$a_1 = \frac{P(x')}{P(x_t)}$$

is the likelihood ratio between the proposed sample $x'$ and the previous sample $x_t$, and

$$a_2 = \frac{Q(x_t \mid x')}{Q(x' \mid x_t)}$$

is the ratio of the proposal density in the two directions (from $x_t$ to $x'$ and vice versa). This ratio is equal to 1 if the proposal density is symmetric. Then the new state $x_{t+1}$ is chosen with the rule

$$x_{t+1} = \begin{cases} x' & \text{with probability } \min(1, a), \\ x_t & \text{otherwise.} \end{cases}$$
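To make the procedure concrete, here is a minimal sketch in Python (NumPy) of a random-walk Metropolis sampler with the Gaussian proposal described above. The function name and interface are illustrative; because the Gaussian proposal is symmetric, $a_2 = 1$ and only the likelihood ratio $a_1$ needs to be computed:

```python
import numpy as np

def metropolis_hastings(log_p, x0, sigma=1.0, n_samples=10_000, rng=None):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal
    Q(x' | x_t) = N(x_t, sigma^2 I), so a2 = 1 and a = a1 = P(x') / P(x_t).

    log_p returns log P(x); an unnormalised target is fine, since only
    the ratio P(x') / P(x_t) enters the acceptance probability.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    log_px = log_p(x)
    samples = np.empty((n_samples, x.size))
    for t in range(n_samples):
        x_new = x + sigma * rng.standard_normal(x.size)  # draw from Q(x' | x_t)
        log_px_new = log_p(x_new)
        # Acceptance probability min(1, a); clamping in log space avoids overflow.
        a = np.exp(min(0.0, log_px_new - log_px))
        if rng.random() < a:
            x, log_px = x_new, log_px_new  # accept: move to x'
        samples[t] = x  # record the state (unchanged if the proposal was rejected)
    return samples

# Example: sample a standard 1-D Gaussian, then discard the burn-in samples.
chain = metropolis_hastings(lambda x: -0.5 * float(x @ x), x0=0.0)
kept = chain[2000:]
```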
The Markov chain is started from a random initial value $x_0$ and the algorithm is run for a few thousand iterations so that this initial state is "forgotten". These discarded samples are known as burn-in.

The algorithm works best if the proposal density matches the shape of the target distribution $P(x)$, but in most cases this shape is unknown. If a Gaussian proposal is used, the variance parameter $\sigma^2$ has to be tuned during the burn-in period. This is usually done by monitoring the acceptance rate, the fraction of proposed samples that are accepted over a window of the last $N$ samples; this rate is usually set to be around 60%. If the proposal steps are too small, the chain will mix slowly, i.e. it will move around the space slowly and converge slowly to $P(x)$. If the proposal steps are too large, the acceptance rate will be very low, because the proposals are likely to land in regions of much lower probability density, making $a_1$ very small.
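As a rough sketch of this tuning step (the multiplicative update rule, window size, and function name are assumptions of mine; the ~60% acceptance-rate target and the windowed measurement come from the text above):

```python
import numpy as np

def tune_proposal_scale(log_p, x0, sigma=1.0, target_rate=0.60,
                        window=200, n_windows=25, rng=None):
    """Burn-in helper: adjust sigma so that the acceptance rate measured
    over a window of recent proposals approaches target_rate.

    The multiplicative update below is an illustrative heuristic, not a
    rule prescribed by the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    log_px = log_p(x)
    for _ in range(n_windows):
        accepted = 0
        for _ in range(window):
            x_new = x + sigma * rng.standard_normal(x.size)
            log_px_new = log_p(x_new)
            if rng.random() < np.exp(min(0.0, log_px_new - log_px)):
                x, log_px = x_new, log_px_new
                accepted += 1
        rate = accepted / window
        # Too many acceptances: steps are timid, so widen them. Too few:
        # proposals land in low-density regions, so shrink them.
        sigma *= np.exp(rate - target_rate)
    return sigma, x  # tuned scale and the final burn-in state
```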