Uniform ergodicity of the Particle Gibbs sampler
The particle Gibbs (PG) sampler is a systematic way of making use of a particle filter within Markov chain Monte Carlo (MCMC). This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution of a state space model in an MCMC scheme. We show that this algorithm is uniformly ergodic under rather general assumptions, which we carefully review and discuss. In particular, we provide an explicit rate of convergence which reveals that: (i) for a fixed number of data points T, the mixing rate can be made arbitrarily good by increasing the number of particles N, and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles as N = O(T^δ), where the exponent δ can be computed explicitly. First, we show that under strong mixing conditions δ = 1 suffices and, second, we study in detail a popular stochastic volatility model with a non-compact state space and show that any δ > 1 will suffice.
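To make the object of study concrete, the following is a minimal sketch of the PG Markov kernel, i.e. one conditional SMC sweep: a particle filter is run with one particle slot clamped to the reference trajectory, and a new trajectory is drawn by ancestral tracing. The linear-Gaussian model x_t = φ·x_{t-1} + σ·ε_t, y_t = x_t + τ·ν_t and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def csmc_sweep(y, x_ref, n_particles, rng, phi=0.9, sigma=1.0, tau=1.0):
    """One conditional SMC (particle Gibbs) sweep.

    Particle 0 is clamped to the reference trajectory x_ref; the sweep
    returns a new trajectory sampled from the resulting particle system,
    leaving the joint smoothing distribution invariant.
    Model and parameters are illustrative assumptions.
    """
    T, N = len(y), n_particles
    x = np.zeros((T, N))            # particle positions
    anc = np.zeros((T, N), dtype=int)  # ancestor indices
    # Initialise from the stationary prior; slot 0 holds the reference.
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2), size=N)
    x[0, 0] = x_ref[0]
    logw = -0.5 * (y[0] - x[0]) ** 2 / tau**2
    for t in range(1, T):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Multinomial resampling; slot 0 keeps the reference lineage.
        anc[t] = rng.choice(N, size=N, p=w)
        anc[t, 0] = 0
        x[t] = phi * x[t - 1, anc[t]] + sigma * rng.normal(size=N)
        x[t, 0] = x_ref[t]
        logw = -0.5 * (y[t] - x[t]) ** 2 / tau**2
    # Draw one terminal index and trace its ancestral line backwards.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.zeros(T)
    for t in range(T - 1, -1, -1):
        traj[t] = x[t, k]
        k = anc[t, k]
    return traj
```

Iterating `csmc_sweep`, feeding each output back in as the next reference trajectory, yields the PG Markov chain on state trajectories whose uniform ergodicity the paper analyses; the abstract's point (i) corresponds to the mixing of this chain improving as `n_particles` grows for fixed `T`.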