ResearchTrend.AI
Leveraging Randomness in Model and Data Partitioning for Privacy Amplification

4 March 2025
Andy Dong
Wei-Ning Chen
Ayfer Özgür
    FedML
Abstract

We study how inherent randomness in the training process -- where each sample (or client in federated learning) contributes only to a randomly selected portion of training -- can be leveraged for privacy amplification. This includes (1) data partitioning, where a sample participates in only a subset of training iterations, and (2) model partitioning, where a sample updates only a subset of the model parameters. We apply our framework to model parallelism in federated learning, where each client updates a randomly selected subnetwork to reduce memory and computational overhead, and show that existing methods, e.g. model splitting or dropout, provide a significant privacy amplification gain not captured by previous privacy analysis techniques. Additionally, we introduce Balanced Iteration Subsampling, a new data partitioning method where each sample (or client) participates in a fixed number of training iterations. We show that this method yields stronger privacy amplification than Poisson (i.i.d.) sampling of data (or clients). Our results demonstrate that randomness in the training process, which is structured rather than i.i.d. and interacts with data in complex ways, can be systematically leveraged for significant privacy amplification.
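The model-partitioning idea above can be illustrated with a minimal sketch (all names here are hypothetical, not from the paper): each client receives a random mask over the model's coordinates and applies its gradient step only to the selected subnetwork, so the untouched coordinates carry no information about that client's data.

```python
import random

def random_subnetwork_mask(num_params, keep_frac, rng=random):
    # Choose a uniformly random subset of parameter indices for this
    # client; keep_frac controls the fraction of the model it updates.
    keep = max(1, round(num_params * keep_frac))
    chosen = set(rng.sample(range(num_params), keep))
    return [i in chosen for i in range(num_params)]

def masked_update(model, grad, mask, lr=0.1):
    # Take a gradient step only on the coordinates this client owns;
    # all other coordinates are returned unchanged.
    return [w - lr * g if m else w
            for w, g, m in zip(model, grad, mask)]
```

For example, with `keep_frac=0.5` a client touches half the parameters per round; the privacy amplification studied in the paper comes from the server (or an adversary) not knowing in advance which half.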

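The two data-partitioning schemes the abstract contrasts can be sketched as follows (a simplified illustration under assumed semantics, not the paper's implementation): Poisson sampling lets each client join each iteration independently with probability q, so participation counts fluctuate, while Balanced Iteration Subsampling assigns every client to exactly k iterations chosen uniformly at random.

```python
import random

def poisson_sampling(num_clients, num_iters, q, rng=random):
    # Each client independently joins each iteration with probability q,
    # so a client's total participation count is Binomial(num_iters, q).
    return [[c for c in range(num_clients) if rng.random() < q]
            for _ in range(num_iters)]

def balanced_iteration_subsampling(num_clients, num_iters, k, rng=random):
    # Each client participates in exactly k iterations, drawn uniformly
    # without replacement -- structured (non-i.i.d.) randomness.
    schedule = [[] for _ in range(num_iters)]
    for c in range(num_clients):
        for t in rng.sample(range(num_iters), k):
            schedule[t].append(c)
    return schedule
```

Setting q = k / num_iters matches the two schemes' expected participation, which is the natural point of comparison for the amplification bounds.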
@article{dong2025_2503.03043,
  title={Leveraging Randomness in Model and Data Partitioning for Privacy Amplification},
  author={Andy Dong and Wei-Ning Chen and Ayfer {\"O}zg{\"u}r},
  journal={arXiv preprint arXiv:2503.03043},
  year={2025}
}