Bayesian Differential Privacy for Machine Learning

Boi Faltings
Abstract

We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate a significant advantage over state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit model accuracy and the speed of learning. Additionally, we demonstrate the applicability of Bayesian differential privacy to variational inference and achieve a state-of-the-art privacy-accuracy trade-off.
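To give a sense of the kind of privacy accounting the abstract refers to, the following is a minimal sketch of moments-accountant-style composition for the plain (non-subsampled) Gaussian mechanism with sensitivity 1. The function names and the choice of mechanism are illustrative assumptions, not the paper's method: the paper's Bayesian accountant tightens such bounds by exploiting the data distribution, which this sketch does not do.

```python
import math


def gaussian_log_moment(lmbda: int, sigma: float) -> float:
    # Log moment (order lmbda) of the sensitivity-1 Gaussian mechanism
    # with noise scale sigma; a standard closed-form bound.
    return lmbda * (lmbda + 1) / (2 * sigma ** 2)


def epsilon_after(num_steps: int, sigma: float, delta: float,
                  max_lambda: int = 64) -> float:
    # Log moments compose additively over iterations; convert the
    # composed moment to an (epsilon, delta)-DP guarantee by taking
    # the minimum over moment orders.
    best = float("inf")
    for lmbda in range(1, max_lambda + 1):
        total = num_steps * gaussian_log_moment(lmbda, sigma)
        eps = (total + math.log(1.0 / delta)) / lmbda
        best = min(best, eps)
    return best
```

As expected, larger noise scales yield smaller privacy budgets for the same number of iterations, e.g. `epsilon_after(1000, 8.0, 1e-5)` is smaller than `epsilon_after(1000, 4.0, 1e-5)`.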
