Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent

Neural Information Processing Systems (NeurIPS), 2021
11 February 2021 · arXiv:2102.05855
R. Chourasia, Jiayuan Ye, Reza Shokri
Abstract

What is the information leakage of an iterative learning algorithm about its training data, when the internal state of the algorithm is not observable? How much does each specific training epoch contribute to the final leakage? We study this problem for noisy gradient descent algorithms, and model the dynamics of the Rényi differential privacy loss throughout the training process. Our analysis traces a provably tight bound on the Rényi divergence between the pair of probability distributions over model parameters induced by neighboring datasets. We prove that, for smooth and strongly convex loss functions, the privacy loss converges exponentially fast, a significant improvement over composition theorems. For Lipschitz, smooth, and strongly convex loss functions, we prove that differentially private algorithms achieve optimal utility with small gradient complexity.
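To make the setting concrete, the following is a minimal sketch of the noisy gradient descent update the abstract analyzes: theta_{t+1} = theta_t - eta * grad L(theta_t) + xi_t, with Gaussian noise xi_t ~ N(0, sigma^2 I). The quadratic loss, step size eta, noise scale sigma, and iteration count below are illustrative assumptions for a toy strongly convex problem, not the paper's calibration, and the function name noisy_gradient_descent is hypothetical.

import numpy as np

def noisy_gradient_descent(grad_fn, theta0, eta, sigma, n_steps, rng):
    # Noisy GD: theta <- theta - eta * grad L(theta) + N(0, sigma^2 I).
    # With bounded gradient sensitivity across neighboring datasets, each
    # step behaves like a Gaussian mechanism in Renyi DP terms.
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.normal(0.0, sigma, size=theta.shape)
        theta = theta - eta * grad_fn(theta) + noise
    return theta

# Toy strongly convex loss L(theta) = 0.5 * ||theta - mu||^2,
# whose gradient is theta - mu.
mu = np.array([1.0, -2.0])
rng = np.random.default_rng(0)
theta_final = noisy_gradient_descent(
    grad_fn=lambda th: th - mu,
    theta0=np.zeros(2), eta=0.1, sigma=0.05, n_steps=500, rng=rng,
)
print(theta_final)

The contrast drawn in the abstract: composing the Rényi privacy loss over all T iterates gives a bound growing linearly in T, whereas when only the final iterate is released, the paper's dynamics analysis shows the loss converging exponentially fast to a finite value for smooth, strongly convex losses.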
