Particle Metropolis adjusted Langevin algorithms

Particle MCMC has recently been introduced as a class of algorithms that can be used to analyse state-space models. These algorithms use MCMC moves to update the parameters of the model, and particle filters both to propose values for the latent state and to obtain estimates of the posterior density that are used to calculate the acceptance probability. For many applications it is easy to adapt the particle filter so that it also gives an estimate of the gradient of the log-posterior, and this estimate can then be used within the proposal distribution for the parameters. This results in a particle version of a Metropolis adjusted Langevin algorithm, which we call particle MALA. We investigate the theoretical properties of particle MALA under standard asymptotics, which correspond to an increasing dimension, n, of the parameters. Our results show that the behaviour of particle MALA depends crucially on how accurately we can estimate the gradient of the log-posterior. If the error in the estimate of the gradient is not controlled sufficiently well as we increase dimension, then asymptotically there will be no advantage in using particle MALA over a particle MCMC algorithm with a random-walk proposal. However, if the error is well behaved, then the optimal scaling of particle MALA proposals will be O(n^{-1/6}), as compared to O(n^{-1/2}) when a random-walk proposal is used. Furthermore, we show that asymptotically the optimal acceptance rate is 15.47% and that we should tune the number of particles so that the variance of our estimate of the log-posterior is roughly 3. We also propose a novel implementation of particle MALA, based on the approach of Nemeth et al. (2013) for estimating the gradient of the log-posterior. Empirical results suggest that such an implementation is more efficient than other recently proposed particle MALA algorithms.
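To make the structure of the algorithm concrete, below is a minimal sketch of a single particle MALA update, assuming a user-supplied particle filter that returns an unbiased estimate of the log-likelihood together with an estimate of its gradient (for instance via the approach of Nemeth et al., 2013). The function names, the Gaussian prior, and the step size h are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def log_prior(theta):
    # Hypothetical independent standard-normal prior on each parameter.
    return -0.5 * np.sum(theta ** 2)

def grad_log_prior(theta):
    return -theta

def particle_mala_step(theta, loglik_hat, grad_hat, y, particle_filter,
                       num_particles, h, rng):
    """One particle MALA iteration.

    theta       : current parameter vector
    loglik_hat  : particle-filter estimate of log p(y | theta)
    grad_hat    : particle-filter estimate of grad_theta log p(y | theta)
    h           : proposal step size (of order n^{-1/6} in dimension n)
    """
    d = theta.size
    # Langevin proposal: drift along the estimated gradient of the log-posterior.
    drift = 0.5 * h ** 2 * (grad_hat + grad_log_prior(theta))
    theta_prop = theta + drift + h * rng.standard_normal(d)

    # A fresh particle filter run at the proposed value gives new estimates.
    loglik_prop, grad_prop = particle_filter(theta_prop, y, num_particles)
    drift_prop = 0.5 * h ** 2 * (grad_prop + grad_log_prior(theta_prop))

    def log_q(to, frm, drift_frm):
        # Log-density (up to a constant) of the Gaussian Langevin proposal.
        return -np.sum((to - frm - drift_frm) ** 2) / (2 * h ** 2)

    # Metropolis-Hastings log acceptance ratio using the estimated posterior.
    log_accept = (loglik_prop + log_prior(theta_prop)
                  - loglik_hat - log_prior(theta)
                  + log_q(theta, theta_prop, drift_prop)
                  - log_q(theta_prop, theta, drift))

    if np.log(rng.uniform()) < log_accept:
        return theta_prop, loglik_prop, grad_prop, True
    return theta, loglik_hat, grad_hat, False
```

In a full sampler this step would be repeated after an initial particle filter run at the starting value, with h and the number of particles tuned towards the acceptance rate and log-posterior variance guidelines given in the abstract.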