Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy

23 October 2023 · arXiv:2310.14661
Yingyu Lin, Yian Ma, Yu-Xiang Wang, Rachel Redberg, Zhiqi Bu
Abstract

Posterior sampling, i.e., using the exponential mechanism to sample from the posterior distribution, provides $\varepsilon$-pure differential privacy (DP) guarantees and does not suffer from the potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP. In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo (MCMC), thus re-introducing the unappealing $\delta$-approximation error into the privacy guarantees. To bridge this gap, we propose the Approximate SAmple Perturbation (abbr. ASAP) algorithm, which perturbs an MCMC sample with noise proportional to its Wasserstein-infinity ($W_\infty$) distance from a reference distribution that satisfies pure DP or pure Gaussian DP (i.e., $\delta=0$). We then leverage a Metropolis-Hastings algorithm to generate the sample and prove that the algorithm converges in $W_\infty$ distance. We show that by combining our new techniques with a localization step, we obtain the first nearly linear-time algorithm that achieves the optimal rates in the DP-ERM problem with strongly convex and smooth losses.
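The recipe in the abstract can be sketched in a few lines of Python. The fragment below is a minimal illustration, not the paper's implementation: it runs a random-walk Metropolis-Hastings chain targeting an exponential-mechanism posterior for a toy quadratic loss, then perturbs the final sample with Laplace noise whose scale is proportional to an assumed $W_\infty$ error bound. The function names, the toy loss, the `w_inf_bound` input, and the exact noise calibration are all illustrative assumptions; the paper derives the actual $W_\infty$ convergence bound and the corresponding noise calibration.

```python
# Hedged sketch of the ASAP idea: MH sampling from an exponential-mechanism
# posterior, followed by Laplace perturbation proportional to an assumed
# W_infinity error bound. Illustrative only; not the paper's calibration.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, data, eps, sensitivity):
    # Exponential-mechanism target: density proportional to
    # exp(-eps * loss(theta) / (2 * sensitivity)), with a toy
    # strongly convex quadratic loss standing in for the real ERM loss.
    loss = np.sum((data - theta) ** 2)
    return -eps * loss / (2.0 * sensitivity)

def metropolis_hastings(data, eps, sensitivity, n_steps=5000, step=0.1):
    # Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.
    theta = np.zeros(1)
    lp = log_posterior(theta, data, eps, sensitivity)
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_posterior(prop, data, eps, sensitivity)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
    return theta

def asap_release(data, eps, sensitivity, w_inf_bound):
    # w_inf_bound: assumed upper bound on the W_infinity distance between
    # the chain's law and the exact posterior (the paper proves such a
    # bound for its MH scheme; here it is simply an input).
    sample = metropolis_hastings(data, eps, sensitivity)
    # Laplace noise with scale proportional to the W_infinity error turns
    # the approximate sample into a pure-DP release (sketch calibration).
    noise = rng.laplace(scale=w_inf_bound / eps, size=sample.shape)
    return sample + noise

data = rng.standard_normal(100)
print(asap_release(data, eps=1.0, sensitivity=1.0, w_inf_bound=0.05))
```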
