Sampling from Log-Concave Distributions over Polytopes via a Soft-Threshold Dikin Walk

Given a Lipschitz or smooth convex function f on a bounded polytope K defined by m inequality constraints, we consider the problem of sampling from the log-concave distribution proportional to e^{-f} constrained to K. Interest in this problem derives from its applications to Bayesian inference and differentially private learning. Our main result is a generalization of the Dikin walk Markov chain to this setting that, from a w-warm start, samples to within error delta in total variation distance using a number of arithmetic operations that is polynomial in m, d, L, R, and 1/r, and only logarithmic in w/delta. Here d is the dimension, L is the Lipschitz constant of f, K is contained in a ball of radius R and contains a ball of smaller radius r, and omega is the matrix-multiplication constant governing the cost of the linear-algebra operations in each step. Our algorithm improves on the running time of prior works for a range of parameter settings important for the aforementioned learning applications. Technically, we depart from previous Dikin walks by adding a "soft-threshold" regularizer, derived from the Lipschitz or smoothness properties of f, to the log-barrier function for K. This allows our version of the Dikin walk to propose updates that have a high Metropolis acceptance ratio for e^{-f}, while at the same time remaining inside the polytope K.
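To make the construction concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of one step of a Dikin-style walk whose local metric is the log-barrier Hessian plus a regularizer. The function names (`barrier_hessian`, `dikin_step`) and the free parameters `lam` (the regularizer weight, which in the paper would be derived from the Lipschitz or smoothness constants of f) and `step` are assumptions made for illustration.

```python
import numpy as np

def barrier_hessian(theta, A, b, lam):
    # Hessian of the log-barrier for {x : A x <= b}, plus a
    # regularizer lam * I standing in for the paper's soft-threshold
    # term (there, lam would come from the Lipschitz/smoothness of f).
    s = b - A @ theta  # slacks; all positive when theta is inside K
    return (A.T * (1.0 / s**2)) @ A + lam * np.eye(len(theta))

def log_proposal_density(x, mean, H, step):
    # log density of N(mean, step^2 * H^{-1}) at x, keeping only the
    # terms that do not cancel in the Metropolis ratio.
    d = x - mean
    _, logdet = np.linalg.slogdet(H)
    return 0.5 * logdet - (d @ H @ d) / (2.0 * step**2)

def dikin_step(theta, f, A, b, lam, step, rng):
    # One Metropolis-filtered step of the regularized Dikin walk.
    H = barrier_hessian(theta, A, b, lam)
    # Propose z ~ N(theta, step^2 * H^{-1}): an ellipsoidal step
    # shaped by the local barrier geometry.
    L = np.linalg.cholesky(np.linalg.inv(H))
    z = theta + step * (L @ rng.standard_normal(len(theta)))
    if np.any(b - A @ z <= 0):  # proposal left the polytope: reject
        return theta
    Hz = barrier_hessian(z, A, b, lam)
    # Metropolis-Hastings log acceptance ratio for target exp(-f).
    log_ratio = (-f(z) + f(theta)
                 + log_proposal_density(theta, z, Hz, step)
                 - log_proposal_density(z, theta, H, step))
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return z
    return theta
```

For example, iterating `dikin_step` with A, b encoding the unit box and f(theta) = sum(theta) produces a chain that never leaves the box, since proposals outside K are rejected and the barrier Hessian blows up near the boundary, shrinking steps there.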