A Discrete Bouncy Particle Sampler

Markov Chain Monte Carlo (MCMC) algorithms provide Monte Carlo approximations to expectations with respect to a given probability distribution, π, via an ergodic Markov chain whose invariant distribution is π. Most MCMC methods operate in discrete time and are reversible with respect to the required probability density; however, it is now understood that non-reversible Markov chains can be beneficial in many contexts. In particular, the recently proposed Bouncy Particle Sampler (BPS) leverages a continuous-time and non-reversible Markov process. Although the BPS empirically shows state-of-the-art performance when used to explore certain probability densities, in many situations it is not straightforward, or is impossible, to use. Implementing the BPS typically requires one to be able to compute local upper bounds on the gradient of the log target density. This, for example, rules out the use of the BPS for the wide class of problems for which only pointwise evaluations of the log-density and its gradient are available. We present the Discrete Bouncy Particle Sampler (DBPS), a general algorithm based upon a guided random walk, a partial refreshment of velocity, and a delayed-rejection step. We show that the BPS can be understood as a scaling limit of a special case of the DBPS. In contrast to the BPS, implementing the DBPS only requires pointwise evaluation of the target density and its gradient. We propose extensions of the basic DBPS for situations where the exact gradient of the target density is not available. We describe a limit of the process as the dimension increases to infinity and the scaling decreases to zero, and leverage this to obtain the theoretical efficiency of the DBPS as a function of the partial-refreshment parameter, which leads to a simple and robust tuning criterion. Theoretical and empirical efficiency curves are then compared for different targets and algorithm variations.
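To illustrate the flavour of the DBPS skeleton, the following is a minimal sketch of a guided random walk with partial velocity refreshment, two of the three ingredients named above. For brevity it replaces the delayed-rejection bounce with a plain velocity flip on rejection (Gustafson's guided walk), so only pointwise log-density evaluations are needed; the target, step size `delta`, and refreshment parameter `beta` are illustrative choices, not values from the paper.

```python
import numpy as np

def log_target(x):
    # Standard normal log-density (up to a constant); a stand-in for
    # any target whose log-density can be evaluated pointwise.
    return -0.5 * np.sum(x * x)

def guided_walk_sampler(x0, n_steps, delta=0.8, beta=0.3, seed=0):
    """Guided random walk with partial velocity refreshment.

    A simplified sketch of the DBPS skeleton: on rejection the
    velocity is simply flipped, whereas the full DBPS instead
    attempts a reflected (delayed-rejection) proposal using the
    gradient of the log target.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    v = rng.standard_normal(x.shape)           # velocity ~ N(0, I)
    lp = log_target(x)
    out = np.empty((n_steps, x.size))
    for i in range(n_steps):
        # Partial refreshment: an autoregressive update that
        # preserves the N(0, I) distribution of the velocity.
        v = np.sqrt(1.0 - beta**2) * v + beta * rng.standard_normal(x.shape)
        y = x + delta * v                      # directional (guided) proposal
        lp_y = log_target(y)
        if np.log(rng.uniform()) < lp_y - lp:  # Metropolis accept
            x, lp = y, lp_y
        else:
            v = -v                             # flip velocity on rejection
        out[i] = x
    return out

samples = guided_walk_sampler(0.0, 100_000)
print(samples.mean(), samples.var())
```

Because acceptance keeps the velocity and rejection reverses it, successive moves persist in one direction through high-probability regions, which is the non-reversible behaviour the DBPS exploits; the partial refreshment (controlled by `beta`) trades off this persistence against ergodicity.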