Robust, Secure and Private Bayesian Inference
International Conference on Algorithmic Learning Theory (ALT), 2013
Abstract
This paper examines the robustness and privacy properties of Bayesian estimators under a general set of assumptions. These assumptions generalise the concept of differential privacy to arbitrary outcome spaces and distribution families. We illustrate the assumptions with a number of examples of distribution families for which they hold. We then prove general bounds on the change of the posterior distribution due to changes in the data. Finally, we prove finite sample bounds for privacy under a strong adversarial model.
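To make the notion of "change of the posterior due to changes in the data" concrete, the following sketch (not from the paper; a minimal illustration using a conjugate Beta-Bernoulli model chosen for simplicity) empirically compares the posteriors obtained from two neighbouring datasets that differ in a single observation, measuring the maximum log-ratio of their densities over a grid, which is the quantity that differential-privacy-style guarantees bound.

```python
from math import lgamma, log

def beta_logpdf(x, a, b):
    # Log density of the Beta(a, b) distribution at x in (0, 1).
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + (a - 1) * log(x) + (b - 1) * log(1 - x))

def posterior_params(data, a0=1.0, b0=1.0):
    # Conjugate Beta-Bernoulli update: with a Beta(a0, b0) prior,
    # the posterior after observing `data` is Beta(a0 + heads, b0 + tails).
    heads = sum(data)
    return a0 + heads, b0 + (len(data) - heads)

data = [1, 0, 1, 1, 0, 1, 0, 1]
neighbour = data.copy()
neighbour[0] = 0  # neighbouring dataset: one observation changed

a1, b1 = posterior_params(data)       # Beta(6, 4)
a2, b2 = posterior_params(neighbour)  # Beta(5, 5)

# Maximum absolute log-ratio of the two posterior densities over a grid:
# an empirical proxy for the sensitivity of the posterior to one datum.
grid = [i / 1000 for i in range(1, 1000)]
max_log_ratio = max(abs(beta_logpdf(x, a1, b1) - beta_logpdf(x, a2, b2))
                    for x in grid)
```

Note that for this model the log-ratio grows without bound near the endpoints of the parameter space, which is exactly why privacy results of this kind require assumptions (such as bounded likelihood ratios) on the distribution family.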
