Bayesian Differential Privacy for Machine Learning

Boi Faltings
Abstract

We propose Bayesian differential privacy, a relaxation of differential privacy that provides more practical privacy guarantees for similarly distributed data, especially in difficult scenarios such as deep learning. We derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant improvements in privacy guarantees for typical deep learning datasets, such as MNIST and CIFAR-10, in some cases bringing the privacy budget from 8 down to 0.5. Additionally, we demonstrate the applicability of Bayesian differential privacy to variational inference and achieve a state-of-the-art privacy-accuracy trade-off.
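The moments accountant that the abstract generalises tracks log-moments of the privacy loss, composes them additively across training iterations, and converts the total into an (ε, δ) guarantee. A minimal sketch of that pipeline for the plain (non-subsampled) Gaussian mechanism with L2 sensitivity 1 is shown below; the function names are illustrative, not from the paper, and the closed-form moment λ(λ+1)/(2σ²) holds only under these simplifying assumptions.

```python
import math

def gaussian_log_moment(lam, sigma):
    # λ-th log-moment of the privacy loss of the Gaussian mechanism
    # with L2 sensitivity 1 and noise scale sigma (no subsampling).
    return lam * (lam + 1) / (2 * sigma ** 2)

def epsilon_from_moments(sigma, steps, delta, max_lam=64):
    # Compose log-moments additively over `steps` iterations, then
    # convert to an (epsilon, delta) guarantee by minimising over
    # the moment order lambda (tail-bound conversion).
    best = float("inf")
    for lam in range(1, max_lam + 1):
        total = steps * gaussian_log_moment(lam, sigma)
        eps = (total + math.log(1.0 / delta)) / lam
        best = min(best, eps)
    return best

# Example: 100 iterations with noise scale 4 at delta = 1e-5.
eps = epsilon_from_moments(sigma=4.0, steps=100, delta=1e-5)
```

The Bayesian variant described in the abstract replaces this worst-case per-step moment with one estimated under the data distribution, which is what yields the tighter budgets reported.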
