ResearchTrend.AI


arXiv:1703.07474

Information Theory of Data Privacy

22 March 2017
Genqiang Wu
Xianyao Xia
Yeping He
Abstract

By combining Shannon's cryptography model with an assumption on the lower bound of an adversary's uncertainty about the queried dataset, we develop a secure Bayesian inference-based privacy model and thereby, to some extent, answer Dwork et al.'s question [1]: "why Bayesian risk factors are the right measure for privacy loss". The model ensures that an adversary can obtain only little information about any individual from the model's output, provided the adversary's uncertainty about the queried dataset exceeds the lower bound. Importantly, this lower-bound assumption almost always holds, especially for large datasets. Furthermore, the model is flexible enough to balance privacy and utility: by characterizing the assumption with four parameters, it admits many ways to trade off privacy against utility and to analyze the model's group privacy and composition privacy properties.
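The abstract's central idea, measuring privacy loss as the gap between an adversary's Bayesian posterior and prior beliefs about the dataset, can be illustrated with a toy sketch. This is not the paper's construction; the candidate datasets, prior, and likelihood below are hypothetical, and the loss measure shown is the generic worst-case log posterior-to-prior ratio:

```python
import math

def posterior(prior, likelihood):
    """Bayes' rule over a finite set of candidate datasets.

    prior[d]      -- adversary's prior probability of dataset d
    likelihood[d] -- P(observed mechanism output | dataset d)
    """
    unnorm = {d: prior[d] * likelihood[d] for d in prior}
    z = sum(unnorm.values())
    return {d: p / z for d, p in unnorm.items()}

def bayesian_privacy_loss(prior, likelihood):
    """Worst-case |log(posterior/prior)| over candidate datasets.

    A small value means the output moved the adversary's beliefs
    little beyond their prior uncertainty.
    """
    post = posterior(prior, likelihood)
    return max(abs(math.log(post[d] / prior[d])) for d in prior)

# Uniform prior models a highly uncertain adversary; the likelihood
# models a noisy mechanism only weakly correlated with the dataset.
prior = {"D0": 0.5, "D1": 0.5}
likelihood = {"D0": 0.55, "D1": 0.45}
loss = bayesian_privacy_loss(prior, likelihood)
```

Under a uniform prior the posterior simply renormalizes the likelihood, so the loss here is |log(0.9)| ≈ 0.105: the noisy output shifts the adversary's beliefs only slightly, which is the kind of guarantee the abstract's uncertainty lower bound is meant to secure.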
