Piecewise Deterministic Markov Processes for Scalable Monte Carlo on Restricted Domains

Piecewise deterministic Monte Carlo (PDMC) methods are a class of continuous-time Markov chain Monte Carlo (MCMC) methods which have recently been shown to hold considerable promise. Being non-reversible, PDMC methods often mix significantly faster than classical reversible MCMC competitors. Moreover, in a Bayesian context they can use sub-sampling ideas, so that they need only access one data point per iteration, whilst still maintaining the true posterior distribution as their invariant distribution. However, current methods are limited to parameter spaces of real d-dimensional vectors. We show how these algorithms can be extended to applications involving restricted parameter spaces. In simulations we observe that the resulting algorithm is more efficient than Hamiltonian Monte Carlo for sampling from truncated logistic regression models. The theoretical framework used to justify this extension lays the foundation for the development of other novel PDMC algorithms.
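To make the idea concrete, here is a minimal, illustrative sketch (not the paper's implementation) of one well-known PDMC algorithm, the one-dimensional Zig-Zag sampler, targeting a standard normal restricted to the half-line [0, ∞). Between events the state moves deterministically with velocity v ∈ {−1, +1}; the velocity flips at random event times driven by the gradient of the negative log-density, and, following the restricted-domain idea, it is also reflected whenever the trajectory hits the boundary x = 0. The function name `zigzag_halfnormal` and all tuning choices (time horizon, sampling interval, initial state) are assumptions for illustration.

```python
import math
import random

def zigzag_halfnormal(T=50000.0, dt=0.25, seed=1):
    """Illustrative 1D Zig-Zag sampler for a standard normal truncated
    to [0, inf), with velocity reflection at the boundary x = 0.
    A sketch under stated assumptions, not the paper's implementation."""
    rng = random.Random(seed)
    x, v, t = 1.0, 1.0, 0.0      # position, velocity, clock (assumed start)
    next_sample = dt
    samples = []
    while t < T:
        # For U(x) = x^2 / 2 the event rate along the trajectory is
        # lambda(s) = max(0, v * (x + v s)) = max(0, v*x + s), since v^2 = 1.
        # Draw the next event time by inverting the integrated rate.
        e = rng.expovariate(1.0)
        a = v * x
        tau = -a + math.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        # Time until the trajectory hits the boundary x = 0
        # (only possible while moving left).
        t_hit = x if v < 0 else float("inf")
        t_move = min(tau, t_hit)
        # Record the deterministic segment at regular clock ticks.
        while next_sample <= t + t_move and next_sample <= T:
            samples.append(x + v * (next_sample - t))
            next_sample += dt
        x += v * t_move
        t += t_move
        if t_hit < tau:
            x, v = 0.0, 1.0      # boundary reflection: flip the velocity
        else:
            v = -v               # ordinary Zig-Zag velocity flip
    return samples
```

Averaging the recorded positions approximates expectations under the half-normal target (mean sqrt(2/pi) ≈ 0.798); the only change relative to the unrestricted Zig-Zag sampler is the reflection step at the boundary.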