HMC and underdamped Langevin united in the unadjusted convex smooth case

We consider a family of unadjusted generalized HMC samplers, which includes standard position HMC samplers and discretizations of the underdamped Langevin process. A detailed analysis and optimization of the parameters is conducted in the Gaussian case, showing an improvement of the convergence rate from $\kappa$ to $\sqrt{\kappa}$ in terms of the condition number $\kappa$ when partial velocity refreshment is used instead of classical full refreshment. A similar effect is observed empirically for two related algorithms, namely Metropolis-adjusted gHMC and kinetic piecewise-deterministic Markov processes. Then, a stochastic gradient version of the samplers is considered, for which dimension-free convergence rates are established for log-concave smooth targets over a large range of parameters, unifying previous results on position HMC and underdamped Langevin in a common framework and extending them to HMC with inertia.
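The abstract itself contains no pseudocode; the following minimal Python sketch is only meant to illustrate the kind of unadjusted generalized HMC sampler described above (leapfrog steps without Metropolis correction, followed by a partial velocity refreshment). The function name `ghmc_partial_refresh`, the refreshment parameterization via `eta`, and the toy Gaussian target are illustrative assumptions, not the paper's exact scheme or tuned parameters.

```python
import numpy as np

def ghmc_partial_refresh(grad_U, x0, n_iter=10_000, step=0.1, L=1, eta=0.9, rng=None):
    """Sketch of unadjusted generalized HMC: L leapfrog steps (no accept/reject,
    hence "unadjusted") followed by a partial velocity refreshment
    v <- eta*v + sqrt(1-eta^2)*g with g ~ N(0, I).
    eta = 0 recovers fully refreshed unadjusted HMC; L = 1 with a small step
    behaves like a discretization of the underdamped Langevin process."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    v = rng.standard_normal(x.shape)
    samples = []
    for _ in range(n_iter):
        # Hamiltonian part: L leapfrog (velocity Verlet) steps
        for _ in range(L):
            v -= 0.5 * step * grad_U(x)
            x += step * v
            v -= 0.5 * step * grad_U(x)
        # Partial velocity refreshment (keeps inertia when eta is close to 1)
        v = eta * v + np.sqrt(1.0 - eta**2) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    # Toy ill-conditioned Gaussian target U(x) = x^T diag(D) x / 2, kappa = 100
    D = np.array([1.0, 100.0])
    grad_U = lambda x: D * x
    chain = ghmc_partial_refresh(grad_U, x0=np.ones(2), eta=0.9)
    print("empirical mean:", chain.mean(axis=0))
    print("empirical variance:", chain.var(axis=0))
```

In this sketch, the degree of velocity refreshment is the knob whose tuning, in the Gaussian case analyzed in the paper, drives the improvement of the condition-number dependence compared with full refreshment (eta = 0).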