Proximal Markov chain Monte Carlo algorithms

This paper presents two new Langevin Markov chain Monte Carlo methods that use convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions that is widely used in modern high-dimensional statistics and data analysis. The methods are based on a new first-order approximation for Langevin diffusions that exploits log-concavity to construct Markov chains with favourable convergence properties. This approximation is closely related to Moreau-Yosida regularisations for convex functions and uses proximity mappings instead of gradient mappings to approximate the continuous-time process. The first method presented in this paper is an unadjusted Langevin algorithm for log-concave distributions. The method is shown to be stable and geometrically ergodic in many cases for which the conventional unadjusted Langevin algorithm is transient or explosive. The second method is a new Metropolis adjusted Langevin algorithm that complements the unadjusted algorithm with a Metropolis-Hastings step guaranteeing convergence to the desired target density. It is shown that this method inherits the convergence properties of the unadjusted algorithm and is geometrically ergodic in many cases for which the conventional Metropolis adjusted Langevin algorithm is not. The proposed methodology is demonstrated on two challenging high-dimensional signal processing applications related to audio compressive sensing and image resolution enhancement.
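To make the construction concrete, here is a minimal sketch of a proximal unadjusted Langevin update of the kind the abstract describes, for a one-dimensional standard Laplace target π(x) ∝ exp(−|x|), a log-concave density whose non-differentiable potential defeats gradient-based Langevin schemes but whose proximity mapping is the well-known soft-thresholding operator. The step size, chain length, and the specific update form `x ← prox(x, δ/2) + √δ·Z` (which follows from replacing the gradient of the potential by the gradient of its Moreau-Yosida envelope with smoothing parameter δ/2) are illustrative choices for this sketch, not prescriptions taken from the paper.

```python
import numpy as np

def prox_abs(x, lam):
    # Proximity mapping of g(x) = |x| with parameter lam:
    #   argmin_u |u| + (u - x)^2 / (2*lam)
    # which is soft-thresholding at level lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def p_ula(n_samples, delta=0.1, x0=0.0, seed=0):
    # Proximal unadjusted Langevin chain for pi(x) ∝ exp(-|x|):
    #   X_{k+1} = prox_{delta/2}(X_k) + sqrt(delta) * Z_k,  Z_k ~ N(0, 1),
    # i.e. a Euler step on the Moreau-Yosida-smoothed potential.
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for k in range(n_samples):
        x = prox_abs(x, delta / 2.0) + np.sqrt(delta) * rng.standard_normal()
        samples[k] = x
    return samples

samples = p_ula(50_000)
# The standard Laplace distribution has mean 0 and variance 2; the chain's
# sample moments should be close, up to discretisation and smoothing bias.
print(samples.mean(), samples.var())
```

Adding a Metropolis-Hastings accept/reject step on top of this proposal yields the adjusted variant the abstract mentions, which removes the discretisation bias at the cost of an extra density evaluation per iteration.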