Nearly Tight Black-Box Auditing of Differentially Private Machine Learning

23 May 2024
Meenatchi Sundaram Muthu Selva Annamalai
Emiliano De Cristofaro
Abstract

This paper presents a nearly tight audit of the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box model. Our auditing procedure empirically estimates the privacy leakage from DP-SGD using membership inference attacks; unlike prior work, the estimates are appreciably close to the theoretical DP bounds. The main intuition is to craft worst-case initial model parameters, as DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters. For models trained with theoretical ε = 10.0 on MNIST and CIFAR-10, our auditing procedure yields empirical estimates of 7.21 and 6.95, respectively, on 1,000-record samples, and 6.48 and 4.96 on the full datasets. By contrast, previous work achieved tight audits only in stronger (i.e., less realistic) white-box models that allow the adversary to access the model's inner parameters and insert arbitrary gradients. Our auditing procedure can be used to detect bugs and DP violations more easily and offers valuable insight into how the privacy analysis of DP-SGD can be further improved.
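The abstract describes the estimator only at a high level. As a rough illustration of how black-box audits of this kind typically turn membership-inference outcomes into an empirical ε estimate, the minimal Python sketch below applies the standard hypothesis-testing bound for (ε, δ)-DP together with Clopper-Pearson confidence intervals; the function names, parameters, and example counts are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the paper's code): deriving an empirical
# lower bound on epsilon from a membership inference attack's error
# rates, via the hypothesis-testing characterization of (eps, delta)-DP.
import numpy as np
from scipy.stats import beta


def clopper_pearson_upper(k, n, alpha):
    """One-sided upper Clopper-Pearson confidence bound on k/n."""
    if k >= n:
        return 1.0
    return float(beta.ppf(1 - alpha, k + 1, n - k))


def empirical_epsilon(fp, fn, n_neg, n_pos, delta=1e-5, alpha=0.05):
    """Lower-bound epsilon from attack errors over repeated training runs.

    fp: times the attack said "member" when the target record was absent
        (out of n_neg runs); fn: times it said "non-member" when the
        record was present (out of n_pos runs). Upper-bounding both
    error rates gives an estimate that holds with confidence 1 - alpha.
    """
    fpr = clopper_pearson_upper(fp, n_neg, alpha / 2)
    fnr = clopper_pearson_upper(fn, n_pos, alpha / 2)
    eps = 0.0
    # (eps, delta)-DP forces FPR + e^eps * FNR >= 1 - delta (and the
    # symmetric inequality), so any attack beating these constraints
    # witnesses a larger true epsilon.
    if fnr > 0 and 1 - delta - fpr > 0:
        eps = max(eps, float(np.log((1 - delta - fpr) / fnr)))
    if fpr > 0 and 1 - delta - fnr > 0:
        eps = max(eps, float(np.log((1 - delta - fnr) / fpr)))
    return eps


# Hypothetical numbers: 1,000 audit runs with and without the target record.
print(empirical_epsilon(fp=2, fn=3, n_neg=1000, n_pos=1000))
```

Under this view, tighter audits come from attacks with lower error rates, which is why the choice of worst-case initial parameters matters: it makes the target record's influence easier to detect in the black-box setting.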
