Beyond the Calibration Point: Mechanism Comparison in Differential Privacy

In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single (ε, δ)-pair. This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given (ε, δ), and potentially introduces privacy vulnerabilities which can remain undetected. This motivates the need for robust, rigorous methods for comparing DP guarantees in such cases. Here, we introduce the Δ-divergence between mechanisms, which quantifies the worst-case excess privacy vulnerability of choosing one mechanism over another in terms of (ε, δ), in terms of f-DP, and in terms of a newly presented Bayesian interpretation. Moreover, as a generalisation of the Blackwell theorem, it is endowed with strong decision-theoretic foundations. Through application examples, we show that our techniques can facilitate informed decision-making and reveal gaps in the current understanding of privacy risks, as current practices in DP-SGD often result in choosing mechanisms with high excess privacy vulnerabilities.
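The abstract's central claim — that two mechanisms calibrated to the same (ε, δ)-pair can offer very different guarantees elsewhere — can be illustrated with standard privacy profiles. The sketch below (not the paper's Δ-divergence; the calibration point (1.0, 10⁻³) is a hypothetical choice for illustration) calibrates a Laplace and a Gaussian mechanism to one shared (ε*, δ*) point, using the well-known analytical Gaussian profile of Balle & Wang (2018) and the closed-form Laplace profile, then evaluates δ(ε) away from that point:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def delta_gaussian(eps, sigma, sens=1.0):
    """Privacy profile delta(eps) of the Gaussian mechanism
    (analytical form due to Balle & Wang, 2018)."""
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return Phi(a - b) - math.exp(eps) * Phi(-a - b)

def delta_laplace(eps, eps0):
    """Privacy profile of the Laplace mechanism satisfying pure eps0-DP."""
    return max(0.0, 1.0 - math.exp((eps - eps0) / 2.0))

# Shared calibration point (hypothetical values for illustration).
eps_star, delta_star = 1.0, 1e-3

# Laplace: solve delta_laplace(eps_star, eps0) == delta_star in closed form.
eps0 = eps_star - 2.0 * math.log1p(-delta_star)

# Gaussian: bisect on sigma (delta is decreasing in the noise scale).
lo, hi = 0.1, 20.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if delta_gaussian(eps_star, mid) > delta_star:
        lo = mid  # too little noise, increase sigma
    else:
        hi = mid
sigma = 0.5 * (lo + hi)

# Both mechanisms satisfy exactly (1.0, 1e-3)-DP, yet away from the
# calibration point their guarantees diverge:
print(delta_laplace(2.0, eps0))    # 0.0: Laplace is pure DP beyond eps0
print(delta_gaussian(2.0, sigma))  # strictly positive: Gaussian still leaks
print(delta_laplace(0.5, eps0))    # far larger than the Gaussian's delta here
print(delta_gaussian(0.5, sigma))
```

At ε = 2 the Laplace mechanism's δ is exactly zero while the Gaussian's remains positive, and at ε = 0.5 the ordering reverses — the two profiles cross at the calibration point, which is precisely the kind of undetected excess vulnerability the abstract describes.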
@article{kaissis2025_2406.08918,
  title   = {Beyond the Calibration Point: Mechanism Comparison in Differential Privacy},
  author  = {Georgios Kaissis and Stefan Kolek and Borja Balle and Jamie Hayes and Daniel Rueckert},
  journal = {arXiv preprint arXiv:2406.08918},
  year    = {2025}
}