
Faster Sampling from Log-Concave Distributions over Polytopes via a Soft-Threshold Dikin Walk

Abstract

We consider the problem of sampling from a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a polytope $K$ defined by $m$ inequalities. Our main result is a "soft-threshold" variant of the Dikin walk Markov chain that requires at most $O((md + d L^2 R^2) \times md^{\omega-1} \log(\frac{w}{\delta}))$ arithmetic operations to sample from $\pi$ within error $\delta > 0$ in the total variation distance from a $w$-warm start, where $L$ is the Lipschitz constant of $f$, $K$ is contained in a ball of radius $R$ and contains a ball of smaller radius $r$, and $\omega$ is the matrix multiplication constant. When a warm start is not available, our result implies an improvement of $\tilde{O}(d^{3.5-\omega})$ arithmetic operations over the previous best bound for sampling from $\pi$ within total variation error $\delta$, which was obtained with the hit-and-run algorithm, in the setting where $K$ is a polytope given by $m = O(d)$ inequalities and $LR = O(\sqrt{d})$. When a warm start is available, our algorithm improves by a factor of $d^2$ arithmetic operations on the best previous bound in this setting, which was obtained for a different version of the Dikin walk algorithm. Plugging our Dikin walk Markov chain into the post-processing algorithm of Mangoubi and Vishnoi (2021), we achieve further improvements in the dependence of the running time for the problem of generating samples from $\pi$ with infinity-distance bounds, in the special case when $K$ is a polytope.
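To make the general approach concrete, the following minimal Python sketch implements one step of a generic Metropolis-adjusted Dikin walk for a target $\pi \propto e^{-f}$ on a polytope $\{x : Ax \le b\}$. It is an illustration under stated assumptions, not the paper's exact algorithm: the soft-threshold modification of the local metric is only stood in for by an optional isotropic regularization term reg, and the names step, reg, and f are illustrative parameters chosen here.

import numpy as np

def barrier_hessian(A, b, theta, reg=0.0):
    # Hessian of the log-barrier sum_i -log(b_i - a_i^T theta),
    # optionally regularized by reg * I (a placeholder for the paper's
    # soft-threshold regularization, whose exact form is not reproduced here).
    s = b - A @ theta                      # slacks; positive inside the polytope
    H = (A / s[:, None]**2).T @ A
    return H + reg * np.eye(A.shape[1])

def dikin_walk_step(A, b, theta, f, step=0.5, reg=0.0, rng=np.random):
    d = theta.shape[0]
    H = barrier_hessian(A, b, theta, reg)
    # Gaussian proposal in the local Dikin ellipsoid: covariance (step^2 / d) * H^{-1}
    L = np.linalg.cholesky(np.linalg.inv(H))
    z = theta + (step / np.sqrt(d)) * (L @ rng.standard_normal(d))
    if np.any(b - A @ z <= 0):             # proposal left the polytope: reject
        return theta
    Hz = barrier_hessian(A, b, z, reg)

    def logq(x, y, Hy):
        # log density (up to constants) of proposing x from y with metric Hy
        diff = x - y
        _, logdet = np.linalg.slogdet(Hy)
        return 0.5 * logdet - (d / (2 * step**2)) * diff @ Hy @ diff

    # Metropolis correction for the target pi proportional to exp(-f)
    log_alpha = (-f(z) + f(theta)) + logq(theta, z, Hz) - logq(z, theta, H)
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return z
    return theta

Iterating dikin_walk_step from a point of a $w$-warm start distribution yields a chain whose proposals adapt to the polytope geometry through the barrier Hessian; the paper's contribution lies in the soft-threshold metric and its mixing analysis, which this sketch does not attempt to capture.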
