Markov chain Monte Carlo where we modify the Markov chain

by Eric Auld   Last Updated September 11, 2019 19:20 PM

I have a general question about Markov chain Monte Carlo. It can be illustrated by an example: Gibbs sampling. In Gibbs sampling, we prove that the target distribution is stationary for the Markov chain $M_t$ that selects a random index and resamples that coordinate from its full conditional. But in practice we usually select the index deterministically. From the Gibbs sampling Wikipedia page:

In practice, the suffix $j$ is not chosen at random, and the chain cycles through the suffixes in order. In general this gives a non-stationary Markov process, but each individual step will still be reversible, and the overall process will still have the desired stationary distribution (as long as the chain can access all states under the fixed ordering).

Why is this true?
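(For context, the standard argument I have seen, sketched here in my own words, goes roughly as follows. Let $K_j$ denote the kernel that resamples coordinate $j$ from its full conditional. Each $K_j$ leaves the target $\pi$ invariant on its own, and invariance survives composition:

```latex
% Each coordinate update leaves \pi invariant:
%   \pi K_j = \pi \quad \text{for } j = 1, \dots, d.
% Hence the systematic sweep K = K_1 K_2 \cdots K_d satisfies
%   \pi K = \pi K_1 K_2 \cdots K_d
%         = \pi K_2 \cdots K_d
%         = \cdots
%         = \pi.
% The sweep kernel K is a homogeneous Markov kernel with stationary
% distribution \pi, even though K itself is generally not reversible
% (only the individual K_j are).
```

So the deterministic-sweep chain, viewed one full sweep at a time, is a homogeneous Markov chain with the desired stationary distribution; what is lost relative to the random scan is reversibility of the composed kernel, not stationarity. I am not sure this is the cleanest way to see it, hence the question.)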

The more general question: it often seems that the Markov chain with the nice theoretical guarantees is not precisely the one we run in practice. How do we then justify that the chain we actually run works well enough for our purposes? (You could say I'm looking for a more general explanation of the quoted section above, one that applies to more than just Gibbs sampling.)
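To make the two chains concrete, here is a minimal sketch of both scan orders (my own illustration, not from the question): Gibbs sampling for a standard bivariate normal with correlation $\rho$, where each full conditional is a univariate normal, so the coordinate updates are exact draws. Both the random scan and the deterministic (systematic) sweep should recover the target correlation.

```python
import numpy as np

# Target: standard bivariate normal with correlation rho.
# Each full conditional is N(rho * x_other, 1 - rho^2), so the
# Gibbs coordinate updates are exact conditional draws.
rho = 0.8

def update_coord(x, j, rng):
    """Resample coordinate j from its conditional given the other coordinate."""
    x = x.copy()
    x[j] = rng.normal(rho * x[1 - j], np.sqrt(1.0 - rho**2))
    return x

def gibbs(n_steps, scan="systematic", seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    samples = []
    for _ in range(n_steps):
        if scan == "systematic":
            # Deterministic sweep: cycle through the indices in order.
            for j in (0, 1):
                x = update_coord(x, j, rng)
        else:
            # Random scan: pick one index uniformly at random per step.
            j = int(rng.integers(2))
            x = update_coord(x, j, rng)
        samples.append(x.copy())
    return np.array(samples)

# Both scan orders target the same stationary distribution, so the
# empirical correlation of each chain should approach rho.
sys_samples = gibbs(20000, scan="systematic")
rand_samples = gibbs(40000, scan="random")
print(np.corrcoef(sys_samples[1000:].T)[0, 1])
print(np.corrcoef(rand_samples[2000:].T)[0, 1])
```

The random-scan chain updates one coordinate per step (so it needs more steps for the same number of coordinate updates), while the systematic sweep updates every coordinate per step; in this toy example both empirical correlations settle near 0.8.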


