
No Free Lunch for Approximate MCMC

J. Johndrow
Natesh S. Pillai
Aaron Smith
Abstract

It is widely known that the performance of Markov chain Monte Carlo (MCMC) algorithms can degrade quickly when targeting computationally expensive posterior distributions, such as when the sample size is large. This has motivated the search for MCMC variants that scale well to large datasets. One general approach has been to look at only a subsample of the data at every step. In this note, we point out that well-known MCMC convergence results often imply that these "subsampling" MCMC algorithms cannot greatly improve performance. We apply these generic results to realistic statistical problems and proposed algorithms, and also discuss some design principles suggested by the results.
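To fix ideas, here is a minimal sketch of the subsampling approach mentioned above: a Metropolis-Hastings chain in which the full-data log-likelihood is replaced at each step by a scaled minibatch estimate. This is an illustration of the generic idea only, not the specific algorithms analyzed in the paper; the names (`subsampled_mh`, `log_lik_term`, the batch size and step size) are hypothetical choices for the sketch.

```python
import numpy as np

def subsampled_mh(data, log_prior, log_lik_term, theta0, n_steps,
                  batch_size, step_size, rng=None):
    """Illustrative subsampled Metropolis-Hastings sketch.

    Each step estimates the log-likelihood from a random minibatch,
    scaled by N / batch_size. The noise in this estimate is what makes
    the resulting chain "approximate" rather than exact.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(data)
    scale = N / batch_size

    def est_log_post(theta, idx):
        # Minibatch estimate of the log-posterior (unbiased for the
        # log-likelihood sum, though not for the acceptance ratio).
        return log_prior(theta) + scale * sum(
            log_lik_term(theta, data[i]) for i in idx)

    theta = theta0
    samples = []
    for _ in range(n_steps):
        idx = rng.choice(N, size=batch_size, replace=False)
        prop = theta + step_size * rng.standard_normal(np.shape(theta))
        # Accept/reject using the *estimated* log-posterior on the
        # same minibatch for both current and proposed states.
        log_alpha = est_log_post(prop, idx) - est_log_post(theta, idx)
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        samples.append(theta)
    return np.array(samples)
```

For example, with scalar data modeled as N(theta, 1), one would pass `log_lik_term = lambda theta, x: -0.5 * (x - theta) ** 2` and a flat `log_prior = lambda theta: 0.0`; the paper's point is that chains of this kind cannot, in general, be made much cheaper than exact MCMC without sacrificing accuracy.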
