Accuracy Gains from Privacy Amplification Through Sampling for Differential Privacy

17 March 2021
Jingchen Hu
Joerg Drechsler
Hang J Kim
Abstract

Recent research in differential privacy demonstrated that (sub)sampling can amplify the level of protection. For example, for ε-differential privacy and simple random sampling with sampling rate r, the actual privacy guarantee is approximately rε if a value of ε is used to protect the output computed on the sample. In this paper, we study whether this amplification effect can be exploited systematically to improve the accuracy of the privatized estimate. Specifically, assuming the agency has information for the full population, we ask under which circumstances accuracy gains can be expected if the privatized estimate is computed on a random sample instead of the full population. We find that accuracy gains can be achieved in certain regimes. However, gains can typically only be expected if the sensitivity of the output with respect to small changes in the database does not depend too strongly on the size of the database. We focus only on algorithms that achieve differential privacy by adding noise to the final output and illustrate the accuracy implications for two commonly used statistics: the mean and the median. We see our research as a first step towards understanding the conditions required for accuracy gains in practice, and we hope that these findings will stimulate further research broadening the scope of differential privacy algorithms and outputs considered.
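
To make the tradeoff described in the abstract concrete, the sketch below compares a Laplace-noised mean computed on a full population at a target budget ε with the same mechanism run on a simple random subsample at the amplified per-sample budget ε₀ = log(1 + (e^ε − 1)/r), a standard amplification-by-subsampling bound that reduces to roughly ε/r only for small ε. The data-generating distribution, the bounded range [0, 1], the parameter values, and all names in the code are illustrative assumptions, not the authors' experimental setup.

# Minimal simulation sketch (not the paper's code): privatized mean on the full
# population vs. on a simple random subsample at the amplified budget.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mean(x, eps, lo=0.0, hi=1.0):
    # eps-DP mean of values clipped to [lo, hi], privatized by adding
    # Laplace noise to the final output (output perturbation).
    x = np.clip(x, lo, hi)
    sensitivity = (hi - lo) / len(x)        # sensitivity of the mean scales with 1/n
    return x.mean() + rng.laplace(scale=sensitivity / eps)

def amplified_budget(eps_target, r):
    # Budget usable on a subsample of rate r so the overall guarantee is
    # eps_target, assuming the bound eps' = log(1 + r * (exp(eps_0) - 1)).
    return np.log1p(np.expm1(eps_target) / r)

N, eps, r, reps = 10_000, 0.5, 0.1, 2_000
population = rng.beta(2, 5, size=N)         # hypothetical bounded data in [0, 1]
true_mean = population.mean()

err_full, err_sub = [], []
for _ in range(reps):
    err_full.append(laplace_mean(population, eps) - true_mean)
    sample = rng.choice(population, size=int(r * N), replace=False)
    err_sub.append(laplace_mean(sample, amplified_budget(eps, r)) - true_mean)

print(f"RMSE, full population (eps={eps}): {np.sqrt(np.mean(np.square(err_full))):.5f}")
print(f"RMSE, subsample r={r} (eps_0={amplified_budget(eps, r):.3f}): "
      f"{np.sqrt(np.mean(np.square(err_sub))):.5f}")

In this toy setting the subsample's sensitivity grows by a factor of 1/r while the usable budget grows more slowly, and sampling adds its own variance, so the sketch typically shows no accuracy gain for the mean. That is consistent with the abstract's caveat that gains require the sensitivity not to depend too strongly on the size of the database.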
