Dyn-D²P: Dynamic Differentially Private Decentralized Learning with Provable Utility Guarantee

Most existing decentralized learning methods with a differential privacy (DP) guarantee rely on constant gradient clipping bounds and a fixed level of DP Gaussian noise at each node throughout training, leading to significant accuracy degradation relative to their non-private counterparts. In this paper, we propose a new Dynamic Differentially Private Decentralized learning approach (termed Dyn-D²P) tailored for general time-varying directed networks. Leveraging the Gaussian differential privacy (GDP) framework for privacy accounting, Dyn-D²P dynamically adjusts gradient clipping bounds and noise levels based on gradient convergence. This dynamic noise strategy enhances model accuracy while preserving the total privacy budget. Extensive experiments on benchmark datasets demonstrate the superiority of Dyn-D²P over counterparts employing fixed-level noise, especially under strong privacy guarantees. Furthermore, we provide a provable utility bound for Dyn-D²P that establishes an explicit dependency on network-related parameters, with a scaling factor in terms of the number of nodes, up to a bias error term induced by gradient clipping. To our knowledge, this is the first model-utility analysis for differentially private decentralized non-convex optimization with dynamic gradient clipping bounds and noise levels.
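To make the core idea concrete, the following is a minimal sketch of a per-step DP gradient update with a dynamic clipping bound and matching noise scale. The linear decay schedule, the constants `c0` and `sigma0`, and the function name are illustrative assumptions, not the schedules defined in the paper; Dyn-D²P's actual adjustment is driven by gradient convergence under GDP accounting.

```python
import numpy as np


def dp_clipped_gradient(grad, step, total_steps, c0=1.0, sigma0=2.0, rng=None):
    """Clip a gradient to a time-varying bound and add Gaussian noise.

    Hypothetical illustration of dynamic clipping + noise: the clip
    bound c_t shrinks linearly over training (assumed schedule, not the
    paper's), and the Gaussian noise std scales with c_t so the privacy
    cost per step stays controlled.
    """
    rng = np.random.default_rng() if rng is None else rng
    decay = 1.0 - 0.5 * step / total_steps   # assumed decay schedule
    c_t = c0 * decay                          # dynamic clipping bound
    norm = np.linalg.norm(grad)
    if norm > c_t:                            # standard L2 clipping
        grad = grad * (c_t / norm)
    noise = rng.normal(0.0, sigma0 * c_t, size=grad.shape)  # noise tied to c_t
    return grad + noise
```

With `sigma0=0` the returned vector has norm at most `c_t`, which is the invariant the clipping step enforces before noise is added.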
@article{zhu2025_2505.06651,
  title   = {Dyn-D$^2$P: Dynamic Differentially Private Decentralized Learning with Provable Utility Guarantee},
  author  = {Zehan Zhu and Yan Huang and Xin Wang and Shouling Ji and Jinming Xu},
  journal = {arXiv preprint arXiv:2505.06651},
  year    = {2025}
}