Dynamic Privacy Budget Allocation Improves Data Efficiency of
Differentially Private Gradient Descent
Conference on Fairness, Accountability and Transparency (FAccT), 2021
Abstract
Protecting privacy during learning while maintaining model performance has become increasingly critical in many applications that involve sensitive data. A popular private learning framework is differentially private learning, in which many gradient iterations are privatized by clipping and noising. Under a fixed privacy constraint, dynamic policies have been shown to improve the final-iterate loss, that is, the quality of the published model. In this talk, we introduce these dynamic techniques for the learning rate, batch size, noise magnitude, and gradient clipping threshold. We also discuss how dynamic policies change the convergence bounds, which provides further insight into the impact of dynamic methods.
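To make the setting concrete, here is a minimal sketch of differentially private gradient descent with one dynamic policy: a noise multiplier that decays over iterations. The schedule, constants, and toy linear-regression objective are illustrative assumptions for exposition, not the exact policies analyzed in the talk.

```python
# Minimal DP-SGD sketch on toy linear regression with a dynamic
# (decaying) noise schedule. All constants are illustrative assumptions.
import numpy as np

def dp_sgd(X, y, epochs=30, lr=0.1, clip=1.0, sigma0=2.0, decay=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(epochs):
        # Dynamic policy: noise multiplier shrinks as training progresses.
        sigma_t = sigma0 / (1.0 + decay * t)
        # Per-example gradients of the squared-error loss: (x . w - y) x
        residuals = X @ w - y
        grads = residuals[:, None] * X                    # shape (n, d)
        # Clip each per-example gradient to L2 norm at most `clip`.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)
        # Average, then add Gaussian noise calibrated to clip and sigma_t.
        noisy_grad = grads.mean(axis=0) + rng.normal(0.0, sigma_t * clip / n, d)
        w -= lr * noisy_grad
    return w
```

Early iterations tolerate larger noise because the loss is far from its minimum, while later iterations need more accurate gradients; a decaying schedule exploits this, and the same reasoning motivates dynamic learning rates, batch sizes, and clipping thresholds.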
