Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy

Abstract

Differentially private (DP) machine learning algorithms involve many sources of randomness, such as random initialization, random batch subsampling, and shuffling. However, such randomness is difficult to account for when proving differential privacy bounds, because it induces mixture distributions for the algorithm's output that are hard to analyze. This paper focuses on improving privacy bounds for shuffling models and for one-iteration differentially private gradient descent (DP-GD) with random initialization using $f$-DP. We derive a closed-form expression for the trade-off function of shuffling models that outperforms the most up-to-date results based on $(\epsilon,\delta)$-DP. Moreover, we investigate the effect of random initialization on the privacy of one-iteration DP-GD. Our numerical computations of the trade-off function indicate that random initialization can enhance the privacy of DP-GD. Our analysis of $f$-DP guarantees for these mixture mechanisms relies on an inequality for trade-off functions introduced in this paper, which implies the joint convexity of $F$-divergences. Finally, we study an $f$-DP analog of the advanced joint convexity of the hockey-stick divergence related to $(\epsilon,\delta)$-DP and apply it to analyze the privacy of mixture mechanisms.
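
As background for the terminology above, the following is a minimal sketch of the standard definitions from the $f$-DP framework of Dong, Roth, and Su; these are classical facts, not results of this paper. For distributions $P$ and $Q$, the trade-off function is

$$T(P,Q)(\alpha) = \inf\{\, \beta_\phi : \alpha_\phi \le \alpha \,\},$$

where the infimum runs over all rejection rules $\phi$ for testing $H_0 \colon P$ against $H_1 \colon Q$, with type I error $\alpha_\phi = \mathbb{E}_P[\phi]$ and type II error $\beta_\phi = 1 - \mathbb{E}_Q[\phi]$. A mechanism $M$ is $f$-DP if $T(M(S), M(S')) \ge f$ pointwise for every pair of neighboring datasets $S, S'$. The $(\epsilon,\delta)$-DP guarantee corresponds to the hockey-stick divergence $H_{e^\epsilon}(P \,\|\, Q) = \sup_A \left[ P(A) - e^\epsilon Q(A) \right]$: $M$ is $(\epsilon,\delta)$-DP if and only if $H_{e^\epsilon}(M(S) \,\|\, M(S')) \le \delta$ for all neighboring $S, S'$. Joint convexity of an $F$-divergence $D_F$ is the statement that, for mixture weights $\lambda_i \ge 0$ summing to one,

$$D_F\Big( \sum_i \lambda_i P_i \,\Big\|\, \sum_i \lambda_i Q_i \Big) \le \sum_i \lambda_i \, D_F(P_i \,\|\, Q_i),$$

which is the sense in which mixture mechanisms such as shuffling and random initialization are amenable to divergence-based analysis.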
