Sampling from Log-Concave Distributions with Infinity-Distance Guarantees

7 November 2021
Oren Mangoubi
Nisheeth K. Vishnoi
Abstract

For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close to $\pi$ in infinity distance $\sup_{\theta \in K} |\log \frac{\nu(\theta)}{\pi(\theta)}|$ arises in differentially private optimization. While sampling within total-variation distance $\varepsilon$ of $\pi$ can be done by algorithms whose runtime depends polylogarithmically on $\frac{1}{\varepsilon}$, prior algorithms for sampling within $\varepsilon$ infinity distance have runtime bounds that depend polynomially on $\frac{1}{\varepsilon}$. We bridge this gap by presenting an algorithm that outputs a point $\varepsilon$-close to $\pi$ in infinity distance using at most $\mathrm{poly}(\log \frac{1}{\varepsilon}, d)$ calls to a membership oracle for $K$ and an evaluation oracle for $f$, when $f$ is Lipschitz. Our approach departs from prior works, which construct Markov chains on a $\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with $\varepsilon$ infinity-distance error; instead, we present a method to directly convert continuous samples from $K$ with total-variation bounds into samples with infinity-distance bounds. This approach also yields an improved dependence on the dimension $d$ in the running time for sampling from a log-concave distribution on a polytope $K$ with infinity-distance error $\varepsilon$, obtained by plugging in TV-distance runtime bounds for the Dikin walk Markov chain.
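For context (not part of the abstract): the infinity-distance guarantee is strictly stronger than the total-variation one, which is why "bridging the gap" in the runtime is nontrivial. If $\sup_{\theta \in K} |\log \frac{\nu(\theta)}{\pi(\theta)}| \le \varepsilon$, then $e^{-\varepsilon}\pi(A) \le \nu(A) \le e^{\varepsilon}\pi(A)$ for every measurable $A \subseteq K$, and hence

\[
  d_{\mathrm{TV}}(\nu, \pi)
    \;=\; \sup_{A \subseteq K} |\nu(A) - \pi(A)|
    \;\le\; \sup_{A \subseteq K} \left(e^{\varepsilon} - 1\right) \pi(A)
    \;\le\; e^{\varepsilon} - 1 \;\approx\; \varepsilon
    \quad \text{as } \varepsilon \to 0,
\]

while no bound in the reverse direction holds: two densities can be TV-close yet have an unbounded log-density ratio on a small region, which is exactly what the infinity-distance requirement of differential privacy rules out.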
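To make the oracle model concrete, here is a minimal illustrative sketch in Python. It is emphatically not the paper's algorithm: it is a naive rejection sampler whose expected number of oracle calls grows exponentially in $d$, which is precisely the cost the paper's $\mathrm{poly}(\log \frac{1}{\varepsilon}, d)$ oracle-complexity bound avoids. The choice of unit-ball body $K$, the 1-Lipschitz potential $f(\theta) = \|\theta\|_2$, and all function names are assumptions made for the example.

# Illustrative only: a naive rejection sampler in the oracle model the
# abstract describes (a membership oracle for K plus an evaluation oracle
# for f). This is NOT the paper's algorithm; its expected number of oracle
# calls grows exponentially in d, unlike the paper's poly(log(1/eps), d)
# bound.
import numpy as np

def membership_oracle(theta):
    # Assumed convex body K: the Euclidean unit ball.
    return np.linalg.norm(theta) <= 1.0

def evaluation_oracle(theta):
    # Assumed 1-Lipschitz potential f(theta) = ||theta||_2.
    return np.linalg.norm(theta)

def naive_sampler(d, rng, f_min=0.0, max_tries=1_000_000):
    """Draw theta ~ pi(theta) proportional to exp(-f(theta)), restricted to K."""
    for _ in range(max_tries):
        # Propose uniformly from the bounding box [-1, 1]^d of K.
        theta = rng.uniform(-1.0, 1.0, size=d)
        if not membership_oracle(theta):
            continue  # outside K: reject
        # Accept with probability exp(-(f(theta) - f_min)) in [0, 1],
        # where f_min lower-bounds f on K (here f >= 0, so f_min = 0).
        if rng.uniform() < np.exp(-(evaluation_oracle(theta) - f_min)):
            return theta
    raise RuntimeError("no sample accepted within max_tries")

rng = np.random.default_rng(0)
print(naive_sampler(d=3, rng=rng))

Because accepted proposals here follow $\pi$ exactly, this baseline trivially meets any infinity-distance target; the paper's contribution is achieving such a guarantee while keeping the oracle complexity polynomial in $d$ and polylogarithmic in $\frac{1}{\varepsilon}$.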
