
GCFL: A Gradient Correction-based Federated Learning Framework for Privacy-preserving CPSS

Main: 9 pages, 6 figures; Bibliography: 2 pages; Appendix: 1 page
Abstract

Federated learning, as a distributed architecture, shows great promise for applications in Cyber-Physical-Social Systems (CPSS). To mitigate the privacy risks inherent in CPSS, the integration of differential privacy with federated learning has attracted considerable attention. Existing research mainly focuses on dynamically adjusting the amount of noise added or discarding certain gradients to reduce the impact of the noise introduced by differential privacy. However, these approaches neither remove the noise that hinders convergence nor correct the gradients distorted by that noise, which significantly reduces model classification accuracy. To overcome these challenges, this paper proposes a novel framework for differentially private federated learning that balances rigorous privacy guarantees with accuracy by introducing a server-side gradient correction mechanism. Specifically, after clients perform gradient clipping and noise perturbation, our framework detects deviations in the noisy local gradients and employs a projection mechanism to correct them, mitigating the negative impact of noise. At the same time, gradient projection promotes the alignment of gradients from different clients and guides the model towards convergence to a global optimum. We evaluate our framework on several benchmark datasets, and the experimental results demonstrate that it achieves state-of-the-art performance under the same privacy budget.
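The abstract does not spell out the exact correction rule, so the following is a minimal sketch of the overall idea under stated assumptions: clients clip and perturb their gradients in the usual DP-SGD fashion, and the server projects out the component of a noisy client update that conflicts with the average update direction (a PCGrad-style projection chosen here purely for illustration, not necessarily the paper's mechanism). The function names `client_update` and `server_correct` and the cosine threshold are hypothetical.

```python
import numpy as np


def client_update(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Client side: clip the local gradient and add Gaussian noise (DP-SGD style)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise


def server_correct(noisy_grads, cos_threshold=0.0):
    """Server side: correct noisy client gradients by projection (illustrative).

    If a noisy gradient deviates too far from the mean update direction
    (cosine similarity below the threshold), the conflicting component is
    projected out, which aligns client updates before aggregation.
    """
    mean = np.mean(noisy_grads, axis=0)
    mean_unit = mean / (np.linalg.norm(mean) + 1e-12)
    corrected = []
    for g in noisy_grads:
        cos = float(g @ mean_unit) / (np.linalg.norm(g) + 1e-12)
        if cos < cos_threshold:
            # Remove the component that points against the consensus direction.
            g = g - (g @ mean_unit) * mean_unit
        corrected.append(g)
    return np.mean(corrected, axis=0)


# Toy usage: three clients with roughly aligned true gradients.
rng = np.random.default_rng(0)
true_grads = [np.array([1.0, 0.5]), np.array([0.9, 0.6]), np.array([1.1, 0.4])]
noisy = np.stack([client_update(g, rng=rng) for g in true_grads])
print(server_correct(noisy))
```

In this sketch the privacy-relevant steps (clipping and noise addition) happen entirely on the client, so the server-side projection operates only on already-perturbed gradients and does not change the differential privacy guarantee.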

@article{wan2025_2506.03618,
  title={GCFL: A Gradient Correction-based Federated Learning Framework for Privacy-preserving CPSS},
  author={Jiayi Wan and Xiang Zhu and Fanzhen Liu and Wei Fan and Xiaolong Xu},
  journal={arXiv preprint arXiv:2506.03618},
  year={2025}
}