Differential privacy provides a rigorous framework to quantify data privacy, and has received considerable interest recently. A randomized mechanism satisfying $(\epsilon,\delta)$-differential privacy (DP) roughly means that, except with a small probability $\delta$, altering a record in a dataset cannot change the probability that an output is seen by more than a multiplicative factor $e^{\epsilon}$. A well-known solution to $(\epsilon,\delta)$-DP is the Gaussian mechanism initiated by Dwork et al. [1] in 2006 with an improvement by Dwork and Roth [2] in 2014, where a Gaussian noise amount of $\sqrt{2\ln(2/\delta)}\cdot\frac{\Delta}{\epsilon}$ [1] or of $\sqrt{2\ln(1.25/\delta)}\cdot\frac{\Delta}{\epsilon}$ [2] is added independently to each dimension of the query result, for a query with $\ell_2$-sensitivity $\Delta$. Although both classical Gaussian mechanisms [1,2] assume $0 < \epsilon \leq 1$, our review finds that many studies in the literature have used the classical Gaussian mechanisms under values of $\epsilon$ and $\delta$ where the added noise amounts of [1,2] do not achieve $(\epsilon,\delta)$-DP. We obtain this result by analyzing the optimal (i.e., minimal) noise amount $\sigma_{\mathrm{opt}}$ for $(\epsilon,\delta)$-DP and identifying $\epsilon$ and $\delta$ where the noise amounts of the classical mechanisms are even less than $\sigma_{\mathrm{opt}}$. Since $\sigma_{\mathrm{opt}}$ has no closed-form expression and needs to be approximated in an iterative manner, we propose Gaussian mechanisms by deriving closed-form upper bounds for $\sigma_{\mathrm{opt}}$. Our mechanisms achieve $(\epsilon,\delta)$-DP for any $\epsilon > 0$, while the classical mechanisms [1,2] do not achieve $(\epsilon,\delta)$-DP for large $\epsilon$ given $\delta$. Moreover, the utilities of our mechanisms improve on those of [1,2] and are close to that of the optimal yet more computationally expensive Gaussian mechanism.
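To make the quantities above concrete, the following is a minimal, numerically naive Python sketch. The function names (`classical_sigma`, `actual_delta`, `optimal_sigma`) are illustrative, not from the paper. It calibrates the classical noise scale of [2], evaluates the exact privacy curve of the Gaussian mechanism (the monotone condition used, e.g., in Balle and Wang's analytic Gaussian mechanism, 2018), and approximates $\sigma_{\mathrm{opt}}$ by bisection; this is the kind of iterative computation the abstract refers to, and the paper's own closed-form upper bounds are not reproduced here.

```python
import math
import random


def classical_sigma(eps: float, delta: float, sensitivity: float) -> float:
    """Noise scale of the classical Gaussian mechanism of Dwork and Roth [2]:
    sigma = sqrt(2 ln(1.25/delta)) * Delta / eps, guaranteed only for 0 < eps <= 1."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / eps


def gaussian_mechanism(values, sigma):
    """Add i.i.d. N(0, sigma^2) noise to each dimension of the query result."""
    return [v + random.gauss(0.0, sigma) for v in values]


def _phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def actual_delta(sigma: float, eps: float, sensitivity: float) -> float:
    """Smallest delta for which N(0, sigma^2) noise yields (eps, delta)-DP,
    via the exact Gaussian privacy condition (cf. Balle and Wang, 2018):
    delta(sigma) = Phi(D/(2s) - eps*s/D) - e^eps * Phi(-D/(2s) - eps*s/D).
    Naive evaluation; cancellation can occur for very small deltas."""
    a = sensitivity / (2.0 * sigma)
    b = eps * sigma / sensitivity
    return _phi(a - b) - math.exp(eps) * _phi(-a - b)


def optimal_sigma(eps: float, delta: float, sensitivity: float,
                  rel_tol: float = 1e-10) -> float:
    """Approximate the minimal sigma achieving (eps, delta)-DP by bisection
    on sigma -> actual_delta(sigma), which decreases in sigma."""
    lo, hi = 1e-12, 1.0
    while actual_delta(hi, eps, sensitivity) > delta:  # grow until feasible
        hi *= 2.0
    while (hi - lo) > rel_tol * hi:
        mid = 0.5 * (lo + hi)
        if actual_delta(mid, eps, sensitivity) > delta:
            lo = mid  # too little noise
        else:
            hi = mid  # already suffices
    return hi


if __name__ == "__main__":
    delta, sens = 1e-5, 1.0
    for eps in (0.5, 1.0, 5.0, 20.0):
        s_cls = classical_sigma(eps, delta, sens)
        s_opt = optimal_sigma(eps, delta, sens)
        # For large eps the classical scale drops below the optimal one,
        # so the classical mechanism no longer attains (eps, delta)-DP.
        print(f"eps={eps:5.1f}  classical={s_cls:8.4f}  optimal={s_opt:8.4f}  "
              f"delta(classical)={actual_delta(s_cls, eps, sens):.2e}")
```

Running this demo at $\delta = 10^{-5}$, $\Delta = 1$ shows the classical scale of [2] exceeding the optimal one for $\epsilon \leq 1$ but falling below it at large $\epsilon$: at $\epsilon = 20$ the classical noise delivers an actual $\delta$ on the order of $10^{-3}$, far above the $10^{-5}$ target, illustrating the failure mode the abstract describes.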