Bringing Differential Private SGD to Practice: On the Independence of
Gaussian Noise and the Number of Training Rounds
In DP-SGD, each round communicates a local SGD update, which leaks some new information about the underlying local data set to the outside world. To provide privacy, Gaussian noise with standard deviation $\sigma$ is added to each local SGD update. However, privacy leakage still accumulates over multiple training rounds. Therefore, to control privacy leakage over an increasing number $T$ of training rounds, the Gaussian noise added per local SGD update must increase with $T$. This dependence of $\sigma$ on $T$ may impose an impractical upper bound on $T$ (because $\sigma$ cannot be too large), leading to a low-accuracy global model (because the global model receives too few local SGD updates). This makes DP-SGD much less competitive than other existing privacy techniques. We show for the first time that, for $(\epsilon,\delta)$-differential privacy, $\sigma$ can be chosen independent of $T$ once $T$ exceeds a threshold. In many existing machine learning problems $T$ is always large and exceeds this threshold. Hence, $\sigma$ becomes ``independent'' of any choice of $T$ above the threshold (the aggregation of privacy leakage increases to a limit). This means that our $\sigma$ only depends on the privacy budget $(\epsilon,\delta)$ rather than on $T$. This important discovery brings DP-SGD to practice -- as also demonstrated by experiments -- because $\sigma$ can remain small, so the trained model attains high accuracy even for large $T$, as usually happens in practice.
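The per-round mechanism described above (clip each local SGD update, then add Gaussian noise of standard deviation $\sigma$) can be sketched as follows. This is a minimal illustration in NumPy, not the paper's exact algorithm; the function and parameter names (`dp_sgd_update`, `clip_norm`, `sigma`) are our own assumptions:

```python
import numpy as np

def dp_sgd_update(grad, clip_norm, sigma, rng):
    """One noised DP-SGD update step (illustrative sketch).

    Clip `grad` to L2 norm at most `clip_norm`, then add Gaussian noise
    with standard deviation sigma * clip_norm, the usual scaling so that
    the noise is calibrated to the update's sensitivity.
    """
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping bound.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise

# Example: a gradient of L2 norm 5 is clipped to norm 1, then noised.
rng = np.random.default_rng(0)
g = np.array([3.0, 4.0])
u = dp_sgd_update(g, clip_norm=1.0, sigma=0.5, rng=rng)
```

The paper's result concerns how large `sigma` must be: naively it must grow with the number of rounds $T$, whereas the claim above is that beyond a threshold on $T$ it need not.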
View on arXiv