ResearchTrend.AI

arXiv:1911.00222
Federated Learning with Differential Privacy: Algorithms and Performance Analysis

1 November 2019
Kang Wei
Jun Li
Ming Ding
Chuan Ma
H. Yang
Farokhi Farhad
Shi Jin
Tony Q. S. Quek
H. Vincent Poor
Abstract

In this paper, to effectively prevent information leakage, we propose a novel framework based on the concept of differential privacy (DP), in which artificial noise is added to the parameters at the clients' side before aggregation, namely, noising before model aggregation FL (NbAFL). First, we prove that NbAFL can satisfy DP under distinct protection levels by properly adapting the variances of the artificial noise. Then we develop a theoretical convergence bound on the loss function of the trained FL model in NbAFL. Specifically, the bound reveals three key properties: 1) there is a tradeoff between convergence performance and privacy protection level, i.e., better convergence performance leads to a lower protection level; 2) for a fixed privacy protection level, increasing the number N of overall clients participating in FL improves the convergence performance; 3) there is an optimal number of maximum aggregation times (communication rounds) in terms of convergence performance for a given protection level. Furthermore, we propose a K-random scheduling strategy, where K (1 < K < N) clients are randomly selected from the N overall clients to participate in each aggregation. We also develop the corresponding convergence bound on the loss function in this case and show that the K-random scheduling strategy retains the above three properties. Moreover, we find that there is an optimal K that achieves the best convergence performance at a fixed privacy level. Evaluations demonstrate that our theoretical results are consistent with simulations, thereby facilitating the design of various privacy-preserving FL algorithms with different tradeoff requirements on convergence performance and privacy levels.
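To make the mechanism described above concrete, here is a minimal sketch of one NbAFL-style aggregation round in NumPy. It is an illustration under simplifying assumptions, not the paper's implementation: the function name `nbafl_round`, the clipping bound `clip`, and the noise scale `sigma` are hypothetical stand-ins; in the paper, the noise variance is calibrated from the (epsilon, delta) DP budget and the number of rounds, which is omitted here.

```python
import numpy as np

def nbafl_round(client_params, K, clip=1.0, sigma=0.1, rng=None):
    """One round of noising-before-model-aggregation FL (illustrative sketch).

    client_params: list of N parameter vectors (np.ndarray), one per client.
    K: number of clients randomly scheduled this round (1 < K < N).
    clip: L2 norm bound on each client's parameters (limits sensitivity).
    sigma: standard deviation of client-side Gaussian noise (assumed value;
           a real DP deployment derives it from the privacy budget).
    """
    rng = rng or np.random.default_rng()
    N = len(client_params)
    # K-random scheduling: pick K of the N clients uniformly at random
    chosen = rng.choice(N, size=K, replace=False)
    noisy_uploads = []
    for i in chosen:
        w = client_params[i]
        # Clip to bound each client's contribution, then add artificial
        # Gaussian noise BEFORE the parameters leave the client
        w = w / max(1.0, np.linalg.norm(w) / clip)
        noisy_uploads.append(w + rng.normal(0.0, sigma, size=w.shape))
    # The server only ever sees (and averages) the noised parameters
    return np.mean(noisy_uploads, axis=0)

# Example: 5 clients with 4-dimensional parameter vectors, K = 3 scheduled
params = [np.full(4, float(i)) for i in range(5)]
aggregated = nbafl_round(params, K=3, rng=np.random.default_rng(0))
```

Because the noise is injected before aggregation, averaging over more scheduled clients reduces the effective noise variance in the aggregated model, which mirrors the paper's observation that convergence improves with more participating clients at a fixed privacy level.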
