
Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios

Abstract

Auditing Differentially Private Stochastic Gradient Descent (DP-SGD) in the final-model setting is challenging and often yields empirical lower bounds that are significantly looser than the theoretical privacy guarantees. We introduce a novel auditing method that achieves tighter empirical lower bounds without additional assumptions by crafting worst-case adversarial samples through loss-based input-space auditing. Our approach surpasses traditional canary-based heuristics and is effective in final-model-only scenarios. Specifically, with a theoretical privacy budget of ε = 10.0, our method achieves an empirical lower bound of 4.914 on MNIST, compared to the baseline of 4.385. Our work offers a practical framework for reliable and accurate privacy auditing in differentially private machine learning.
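
For context, the empirical lower bound in audits of this kind is typically obtained by running a membership-inference test on a canary (here, an adversarially crafted sample) and converting the test's true- and false-positive rates into a certified bound on ε. The sketch below illustrates that standard conversion step with Clopper-Pearson confidence intervals (after Jagielski et al., 2020); it is a minimal illustration of the generic loss-threshold recipe, not the paper's exact procedure, and the function names, threshold choice, and the α = 0.05 level are assumptions for illustration only.

# Minimal sketch (assumed names/values throughout) of converting a
# loss-threshold membership test into an empirical epsilon lower bound,
# the standard final step of loss-based audits. The paper's
# adversarial-sample crafting step is NOT reproduced here.
import numpy as np
from scipy import stats

def clopper_pearson(k, n, alpha=0.05):
    """Two-sided Clopper-Pearson confidence interval for a binomial rate."""
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def empirical_epsilon_lower_bound(losses_in, losses_out, threshold, alpha=0.05):
    """losses_in: canary losses from runs trained WITH the canary.
    losses_out: canary losses from runs trained WITHOUT it.
    The attack guesses "member" when the canary loss falls below threshold."""
    losses_in, losses_out = np.asarray(losses_in), np.asarray(losses_out)
    tp = int(np.sum(losses_in < threshold))    # correct "member" guesses
    fp = int(np.sum(losses_out < threshold))   # false "member" guesses
    tpr_lo, _ = clopper_pearson(tp, len(losses_in), alpha)   # conservative TPR
    _, fpr_hi = clopper_pearson(fp, len(losses_out), alpha)  # conservative FPR
    if tpr_lo <= 0.0 or fpr_hi <= 0.0:
        return 0.0
    # (eps, delta)-DP implies TPR <= e^eps * FPR + delta; with delta ~ 0,
    # an observed (TPR, FPR) pair certifies eps >= log(TPR / FPR).
    return max(0.0, float(np.log(tpr_lo / fpr_hi)))

Under this recipe, tighter bounds come from canaries that widen the gap between the in- and out-loss distributions, which is the quantity the paper's adversarial crafting targets.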

@article{yoon2025_2412.01756,
  title={Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios},
  author={Sangyeon Yoon and Wonje Jeung and Albert No},
  journal={arXiv preprint arXiv:2412.01756},
  year={2025}
}