ResearchTrend.AI

arXiv:2110.11208
User-Level Private Learning via Correlated Sampling

21 October 2021
Badih Ghazi
Ravi Kumar
Pasin Manurangsi
    FedML
Abstract

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample. In this work, we consider the setting where each user holds $m$ samples and privacy protection is enforced at the level of each user's data. We show that, in this setting, we may learn with far fewer users. Specifically, we show that, as long as each user receives sufficiently many samples, we can learn any privately learnable class via an $(\epsilon, \delta)$-DP algorithm using only $O(\log(1/\delta)/\epsilon)$ users. For $\epsilon$-DP algorithms, we show that we can learn using only $O_{\epsilon}(d)$ users even in the local model, where $d$ is the probabilistic representation dimension. In both cases, we show a nearly matching lower bound on the number of users required. A crucial component of our results is a generalization of global stability [Bun et al., FOCS 2020] that allows the use of public randomness. Under this relaxed notion, we employ a correlated sampling strategy to show that global stability can be boosted arbitrarily close to one, at a polynomial expense in the number of samples.
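The abstract's key primitive, correlated sampling, lets two parties holding similar distributions sample from them using shared public randomness so that their outputs agree with probability governed by the distributions' total variation distance. The sketch below is not from the paper; it is a minimal illustration of the classic rejection-based construction (propose from shared randomness, accept with probability proportional to the local distribution), with the function name and setup being our own choices.

```python
import random

def correlated_sample(dist, seed):
    """Sample from `dist` (a dict mapping outcome -> probability, summing
    to 1) using a shared public random seed.

    Two parties holding close distributions who call this with the SAME
    seed see identical (proposal, threshold) pairs, so they output the
    same value except when the shared threshold falls between their two
    acceptance probabilities -- which happens with probability on the
    order of the total variation distance between the distributions.
    Illustrative sketch only; not the paper's exact construction."""
    rng = random.Random(seed)          # shared/public randomness
    domain = sorted(dist)              # fixed ordering both parties agree on
    while True:
        x = domain[rng.randrange(len(domain))]  # propose uniformly
        if rng.random() <= dist[x]:             # accept w.p. dist[x] <= 1
            return x
```

Because acceptance is proportional to `dist[x]`, each party's marginal output is exactly its own distribution; the agreement between parties comes entirely from reusing the same randomness stream, which is the sense in which public randomness "boosts" stability in the abstract.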
