Bringing Differential Private SGD to Practice: On the Independence of
Gaussian Noise and the Number of Training Rounds
In DP-SGD each round communicates a local SGD update, which leaks some new information about the underlying local data set to the outside world. In order to provide privacy, Gaussian noise with standard deviation $\sigma$ is added to local SGD updates after performing a clipping operation. We show that for attaining $(\epsilon,\delta)$-differential privacy, $\sigma$ can be chosen equal to $\sqrt{2(\epsilon+\ln(1/\delta))}/\epsilon$ for $\epsilon=\Omega(T/N^2)$, where $T$ is the total number of rounds and $N$ is equal to the size of the local data set. In many existing machine learning problems, $N$ is always large and $T=O(N^2)$. Hence, $\sigma$ becomes "independent" of any choice of $T$ with $T=O(N^2)$. This means that our $\sigma$ only depends on $\epsilon$ and $\delta$ rather than $T$. As shown in our paper, this differential privacy characterization allows one to {\it a-priori} select parameters of DP-SGD based on a fixed privacy budget (in terms of $\epsilon$ and $\delta$) in such a way as to maximize the anticipated utility (test accuracy). This ability to plan ahead, together with $\sigma$'s independence of $T$ (which allows local gradient computations to be split among as many rounds as needed, even for large $T$ as usually happens in practice), leads to a {\it proactive DP-SGD algorithm} that allows a client to balance its privacy budget with the accuracy of the learned global model based on local test data. We note that the current state-of-the-art differential privacy accountant method, based on $f$-DP, has a closed form for computing the privacy loss of DP-SGD. However, due to its interpretation complexity, it cannot be used in a simple way to plan ahead. Instead, accountant methods are only used for keeping track of how the privacy budget has been spent (after the fact).
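To illustrate the "plan ahead" workflow, here is a minimal sketch that assumes the closed form $\sigma=\sqrt{2(\epsilon+\ln(1/\delta))}/\epsilon$ for the noise standard deviation stated above; the function name and the concrete budget values are illustrative, not from the paper:

```python
import math

def dp_sgd_sigma(eps: float, delta: float) -> float:
    """Noise standard deviation sigma = sqrt(2*(eps + ln(1/delta))) / eps.

    Assumed closed form from the abstract: sigma depends only on the
    privacy budget (eps, delta), not on the number of rounds T.
    """
    if eps <= 0 or not (0 < delta < 1):
        raise ValueError("require eps > 0 and 0 < delta < 1")
    return math.sqrt(2 * (eps + math.log(1 / delta))) / eps

# A client fixes its privacy budget up front, derives sigma once, and is
# then free to split local gradient computations over any T = O(N^2)
# rounds without changing the noise level.
sigma = dp_sgd_sigma(eps=1.0, delta=1e-5)
print(round(sigma, 3))  # → 5.003
```

Because $\sigma$ is fixed before training starts, the client can trade $\epsilon$ against anticipated test accuracy proactively, instead of asking an accountant after the fact how much budget has been consumed.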