Noiseless Privacy for Some Classes of Dependent Data

In this paper we consider the problem of revealing aggregated sensitive data without adding noise. The aim is to reveal some statistics of a data set in a way that preserves the privacy of individuals in the sense of differential privacy. This problem has been solved for many systems by adding noise to the aggregated data or to individual values (in distributed systems with an untrusted aggregator). However, such an approach introduces errors (due to the added noise) that are sometimes unacceptable in real-life scenarios. In 2011, Bhaskar et al. pointed out that in many cases one can ensure a sufficient level of privacy without adding noise by utilizing adversarial uncertainty. Informally speaking, this observation rests on the fact that if at least part of the data is random from the adversary's point of view, this randomness can be effectively used to hide the other values. In our paper we extend this idea and present results for a wider class of data. In particular, we cover data sets with dependent entries. Moreover, in contrast to most previous papers in this field, we give detailed (non-asymptotic) results, which is motivated by practical reasons. Note that this requires a modified approach and more subtle tools.
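As a rough illustration of the adversarial-uncertainty idea (a toy sketch, not the construction analyzed in the paper), consider a group of participants each holding one bit, where every bit except the target's is uniformly random and unknown to the adversary. Releasing the exact, noiseless sum already makes it hard to tell whether the target's bit is 0 or 1. The small simulation below estimates an empirical differential-privacy parameter for such a release; the function name and parameters are illustrative only.

import math
import random
from collections import Counter

# Toy illustration (hypothetical, not the paper's construction): n_others
# participants hold independent Bernoulli(1/2) bits unknown to the adversary,
# and the exact (noiseless) sum of all bits is released. We estimate how well
# the released sum distinguishes the target's bit (0 versus 1), i.e. an
# empirical epsilon in the spirit of differential privacy, where the
# randomness comes from the data itself rather than from added noise.

def empirical_epsilon(n_others, trials=200_000):
    counts0, counts1 = Counter(), Counter()
    for _ in range(trials):
        others = bin(random.getrandbits(n_others)).count("1")
        counts0[others] += 1      # released sum if the target's bit is 0
        counts1[others + 1] += 1  # released sum if the target's bit is 1
    # Maximal log-ratio over outcomes observed under both hypotheses
    # (outcomes seen under only one hypothesis contribute to a small delta).
    eps = 0.0
    for s in counts0.keys() & counts1.keys():
        p0, p1 = counts0[s] / trials, counts1[s] / trials
        eps = max(eps, abs(math.log(p0 / p1)))
    return eps

if __name__ == "__main__":
    for n_others in (10, 100, 1000):
        print(n_others, round(empirical_epsilon(n_others), 3))

For typical outcomes the estimated epsilon shrinks as the number of uncertain participants grows, which is the intuition behind replacing explicit noise with adversarial uncertainty; the paper itself treats dependent data and gives non-asymptotic bounds rather than such i.i.d. simulations.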
View on arXiv