
Beyond the Calibration Point: Mechanism Comparison in Differential Privacy

Main: 9 pages · 8 figures · Bibliography: 2 pages · Appendix: 10 pages
Abstract

In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single (ε, δ)-pair. This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given (ε, δ), and can introduce privacy vulnerabilities that remain undetected. This motivates the need for robust, rigorous methods for comparing DP guarantees in such cases. Here, we introduce the Δ-divergence between mechanisms, which quantifies the worst-case excess privacy vulnerability of choosing one mechanism over another in terms of (ε, δ), of f-DP, and of a newly presented Bayesian interpretation. Moreover, as a generalisation of the Blackwell theorem, it is endowed with strong decision-theoretic foundations. Through application examples, we show that our techniques can facilitate informed decision-making and reveal gaps in the current understanding of privacy risks, as current practices in DP-SGD often result in choosing mechanisms with high excess privacy vulnerabilities.
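To illustrate the abstract's central point numerically, the sketch below (not the paper's Δ-divergence itself, whose exact definition is in the paper) calibrates a Gaussian mechanism tightly to a single pair (ε = 1, δ = 1e-5) using the analytic Gaussian mechanism condition of Balle & Wang, then compares its f-DP trade-off curve with the piecewise-linear trade-off curve implied by the bare (ε, δ) guarantee alone. The sizeable pointwise gap shows that two mechanisms sharing the same (ε, δ) point can offer very different protection; all names and parameter choices here are illustrative assumptions.

```python
# Illustration (assumed setup, not the paper's method): compare the f-DP
# trade-off curve of a tightly calibrated Gaussian mechanism with the
# trade-off curve implied by its (eps, delta) pair alone.
from math import exp
from statistics import NormalDist

N = NormalDist()  # standard normal: N.cdf = Phi, N.inv_cdf = Phi^{-1}
EPS, DELTA = 1.0, 1e-5

def gaussian_delta(sigma: float, eps: float = EPS) -> float:
    """Tight delta(eps) of the sensitivity-1 Gaussian mechanism
    (analytic Gaussian mechanism, Balle & Wang 2018)."""
    return (N.cdf(1 / (2 * sigma) - eps * sigma)
            - exp(eps) * N.cdf(-1 / (2 * sigma) - eps * sigma))

# Calibrate sigma by bisection: gaussian_delta is decreasing in sigma here.
lo, hi = 0.5, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gaussian_delta(mid) > DELTA else (lo, mid)
sigma = (lo + hi) / 2

def f_gauss(alpha: float) -> float:
    """Trade-off curve of the calibrated Gaussian mechanism (mu-GDP, mu = 1/sigma)."""
    return N.cdf(N.inv_cdf(1 - alpha) - 1 / sigma)

def f_eps_delta(alpha: float) -> float:
    """Trade-off curve guaranteed by the (eps, delta) pair alone."""
    return max(0.0,
               1 - DELTA - exp(EPS) * alpha,
               exp(-EPS) * (1 - DELTA - alpha))

grid = [i / 1000 for i in range(1, 1000)]
gap = max(f_gauss(a) - f_eps_delta(a) for a in grid)
print(f"sigma = {sigma:.3f}, worst-case trade-off gap = {gap:.3f}")
```

The gap printed here is strictly positive: the Gaussian mechanism is far less vulnerable than the worst mechanism sharing its (ε, δ) certificate, which is exactly the kind of difference a single (ε, δ)-pair cannot express.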
