Statistically efficient thinning of a Markov chain sampler
Revised to include some cases of autocorrelations that are not exactly of the autoregressive form.

It is common to subsample Markov chain samples to reduce the storage burden of the output. It is also well known that discarding k − 1 out of every k observations will not improve statistical efficiency. It is less frequently remarked that subsampling a Markov chain allows one to omit some of the computation beyond that needed to simply advance the chain. When this reduced computation is accounted for, thinning the Markov chain by subsampling it can improve statistical efficiency. The autocorrelation among Markov chain samples very often resembles that of a first order autoregressive process defined by a first order correlation parameter ρ. For a given ρ and a cost ratio θ (the cost of computing the quantity of interest relative to the cost of advancing the chain), it is possible to compute the most efficient subsampling frequency k. The optimal k grows rapidly as ρ increases towards 1. The resulting efficiency gain depends primarily on θ, not ρ. Taking k = 1 (no thinning) is optimal when ρ ≤ 0. For ρ > 0 it is optimal if and only if θ ≤ (1 − ρ)²/(2ρ). This efficiency gain never exceeds 1 + θ. Statistical efficiencies depend on autocorrelations in a continuous way, so thinning can still help when the autocorrelations are not precisely autoregressive. For autocorrelations bounded between those of two autoregressive processes, it is possible to compute a range of thinning factors to which the optimal one must belong.
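The AR(1) optimization described in the abstract can be sketched numerically. The sketch below assumes the standard asymptotic variance factor (1 + ρ^k)/(1 − ρ^k) per retained sample when an AR(1) chain with correlation ρ is thinned by k, and a cost of k + θ per retained sample (k chain advances plus one measurement); the optimal k then minimizes their product. Function names and the search cap `k_max` are illustrative, not from the paper.

```python
def cost_times_variance(k: int, rho: float, theta: float) -> float:
    """Cost per retained sample times its asymptotic variance factor.

    Samples kept every k steps of an AR(1) chain with correlation rho
    have correlation rho**k, giving asymptotic variance proportional to
    (1 + rho**k) / (1 - rho**k); each retained sample costs k chain
    advances plus theta units for computing the quantity of interest.
    """
    return (k + theta) * (1 + rho**k) / (1 - rho**k)


def optimal_thinning(rho: float, theta: float, k_max: int = 10_000) -> int:
    """Integer thinning factor k >= 1 minimizing cost times variance.

    k_max is an arbitrary search cap for this sketch, not part of the paper.
    """
    return min(range(1, k_max + 1),
               key=lambda k: cost_times_variance(k, rho, theta))


def efficiency_gain(rho: float, theta: float) -> float:
    """Efficiency of the best thinning factor relative to k = 1 (no thinning)."""
    k = optimal_thinning(rho, theta)
    return cost_times_variance(1, rho, theta) / cost_times_variance(k, rho, theta)
```

As a check against the stated threshold: with ρ = 0.5 the boundary is (1 − 0.5)²/(2 · 0.5) = 0.25, so `optimal_thinning(0.5, 0.2)` returns 1 while `optimal_thinning(0.5, 0.3)` returns a factor of 2 or more, and the gain stays below 1 + θ.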