I have a general question about Markov chain Monte Carlo. It can be illustrated by an example: Gibbs sampling. For Gibbs sampling, we prove that the target distribution is stationary for the Markov chain $M_t$, where each step of $M_t$ selects a random index and resamples that coordinate from its full conditional (the "random scan"). But in practice we usually select the indices deterministically, cycling through them in a fixed order (the "systematic scan"). From the Gibbs sampling Wikipedia page:
> In practice, the suffix $j$ is not chosen at random, and the chain cycles through the suffixes in order. In general this gives a non-stationary Markov process, but each individual step will still be reversible, and the overall process will still have the desired stationary distribution (as long as the chain can access all states under the fixed ordering).
Why is this true?
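To make the two update schemes concrete, here is a minimal sketch of both scans for a simple target. The bivariate-normal target, the function names, and the parameter choices are my own illustration, not taken from the quoted text; the point is only that the two samplers differ in how the coordinate index is chosen.

```python
import numpy as np

# Gibbs sampling sketch for a bivariate standard normal with correlation rho.
# Full conditionals: x | y ~ N(rho*y, 1-rho^2) and y | x ~ N(rho*x, 1-rho^2).
def gibbs(n_steps, rho=0.5, scan="systematic", seed=0):
    rng = np.random.default_rng(seed)
    state = np.zeros(2)                    # current state (x, y)
    samples = np.empty((n_steps, 2))
    sd = np.sqrt(1.0 - rho**2)             # conditional standard deviation
    for t in range(n_steps):
        if scan == "systematic":
            order = (0, 1)                 # deterministic sweep over coordinates
        else:
            order = (rng.integers(2),)     # random scan: one random coordinate
        for i in order:
            j = 1 - i
            # resample coordinate i from its full conditional given the other
            state[i] = rng.normal(rho * state[j], sd)
        samples[t] = state
    return samples

samples = gibbs(20000, scan="systematic")
print(samples.mean(axis=0))  # should be close to (0, 0)
```

Both scans leave the same target distribution invariant here, but only the random scan yields a reversible chain overall; the systematic scan is reversible step by step, which is exactly the situation the quote describes.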
The more general question is this: it often seems that the Markov chain with the nice theoretical guarantees is not precisely the one we use in practice. How do we then justify that the chain we actually run works well enough for our purposes? (You could say I'm looking for a more general explanation of the quoted passage above, one that applies to more than just Gibbs sampling.)