On automating Markov chain Monte Carlo for a class of spatial models

Markov chain Monte Carlo (MCMC) algorithms provide a very general recipe for estimating properties of complicated distributions. While their use has become commonplace and there is a large literature on MCMC theory and practice, users of MCMC still face several challenges with each implementation of the algorithm. These challenges include determining how to construct an efficient algorithm, finding reasonable starting values, deciding whether the sample-based estimates are accurate, and determining an appropriate length (stopping rule) for the Markov chain. We describe an approach for resolving these issues in a theoretically sound fashion in the context of spatial generalized linear models, an important class of models that result in challenging posterior distributions. Our approach combines analytical approximations, used to construct provably fast mixing MCMC algorithms, with recent developments in MCMC theory. We apply our methods to real data examples, and find that our MCMC algorithm is automated and efficient. Furthermore, since starting values, rigorous error estimates, and theoretically justified stopping rules for the sampling algorithm are all easily obtained for our examples, our MCMC-based estimation is practically as easy to perform as Monte Carlo estimation based on independent and identically distributed draws.
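To make the notions of "rigorous error estimates" and "theoretically justified stopping rules" concrete, the sketch below illustrates one standard way such ideas are implemented in practice: a toy random-walk Metropolis sampler whose run length is chosen by a fixed-width stopping rule based on batch-means Monte Carlo standard errors. This is an illustrative example only, not the authors' algorithm or model; the target density, proposal scale, tolerance, and batch count are all hypothetical choices.

```python
# Illustrative sketch: fixed-width stopping rule for MCMC output,
# using batch-means standard errors on a toy random-walk Metropolis chain.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy target: standard normal log-density (up to a constant).
    return -0.5 * x * x

def batch_means_se(samples, n_batches=30):
    # Estimate the Monte Carlo standard error of the sample mean by
    # splitting the chain into consecutive batches of equal length.
    n = len(samples) // n_batches * n_batches
    batches = np.asarray(samples[:n]).reshape(n_batches, -1).mean(axis=1)
    return batches.std(ddof=1) / np.sqrt(n_batches)

def run_until_fixed_width(tol=0.01, step=2.4, check_every=5000, max_iter=10**6):
    # Run the chain, checking the half-width of an approximate 95% interval
    # for the posterior mean every `check_every` iterations; stop once it
    # falls below `tol` (or the iteration cap is reached).
    x, samples = 0.0, []
    half_width = np.inf
    for i in range(1, max_iter + 1):
        prop = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
        if i % check_every == 0:
            half_width = 1.96 * batch_means_se(samples)
            if half_width < tol:
                break
    return np.mean(samples), half_width, len(samples)

est, half_width, n = run_until_fixed_width()
print(f"mean estimate {est:.4f} +/- {half_width:.4f} after {n} draws")
```

The same pattern (monitor a Monte Carlo standard error, stop when a prescribed precision is reached) is what allows MCMC-based estimation to be reported with error bars in the way independent-sample Monte Carlo estimates are.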