γ-FedHT: Stepsize-Aware Hard-Threshold Gradient Compression in Federated Learning

18 May 2025
Rongwei Lu
Yutong Jiang
Jinrui Zhang
Chunyang Li
Yifei Zhu
Bin Chen
Zhi Wang
    FedML
Abstract

Gradient compression can effectively alleviate communication bottlenecks in Federated Learning (FL). Contemporary state-of-the-art sparse compressors, such as Top-$k$, exhibit high computational complexity, up to $\mathcal{O}(d\log_2 k)$, where $d$ is the number of model parameters. The hard-threshold compressor, which simply transmits elements with absolute values above a fixed threshold, was therefore proposed to reduce the complexity to $\mathcal{O}(d)$. However, hard-threshold compression causes accuracy degradation in FL, where the datasets are non-IID and the stepsize $\gamma$ decreases for model convergence. The decaying stepsize shrinks the updates and causes the compression ratio of the hard-threshold compressor to drop rapidly to an aggressive ratio; at or below this ratio, the model accuracy has been observed to degrade severely. To address this, we propose $\gamma$-FedHT, a stepsize-aware low-cost compressor with Error-Feedback to guarantee convergence. Given that the traditional theoretical framework of FL does not consider Error-Feedback, we introduce the fundamental conversion of Error-Feedback. We prove that $\gamma$-FedHT has a convergence rate of $\mathcal{O}(\frac{1}{T})$ ($T$ denoting the total number of training iterations) in $\mu$-strongly convex cases and $\mathcal{O}(\frac{1}{\sqrt{T}})$ in non-convex cases, the same as FedAVG. Extensive experiments demonstrate that $\gamma$-FedHT improves accuracy by up to 7.42% over Top-$k$ under equal communication traffic on various non-IID image datasets.
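The mechanism described above can be illustrated with a short sketch: a hard-threshold compressor that transmits only coordinates whose magnitude exceeds a threshold, keeps the untransmitted remainder as Error-Feedback, and scales the threshold with the stepsize so that the compression ratio does not collapse as $\gamma$ decays. This is a minimal illustration under assumed conventions, not the authors' implementation: the function name gamma_hard_threshold and the proportional rule threshold = base_threshold * stepsize are expository assumptions; the paper's exact threshold schedule may differ.

import numpy as np

def gamma_hard_threshold(grad, residual, stepsize, base_threshold):
    # Error-Feedback: fold back the part of previous updates that was not sent.
    corrected = grad + residual
    # Stepsize-aware threshold (assumed proportional rule): shrink the threshold
    # along with the decaying stepsize so sparsity does not become too aggressive.
    threshold = base_threshold * stepsize
    mask = np.abs(corrected) > threshold           # O(d) element-wise selection
    compressed = np.where(mask, corrected, 0.0)    # sparse update sent to the server
    new_residual = corrected - compressed          # kept locally for the next round
    return compressed, new_residual

# Toy usage: one client compressing gradients under a decaying stepsize.
rng = np.random.default_rng(0)
residual = np.zeros(10)
for t in range(1, 4):
    stepsize = 0.1 / t                             # decaying gamma
    grad = rng.normal(size=10)
    update, residual = gamma_hard_threshold(grad, residual, stepsize, base_threshold=5.0)

Keeping the threshold proportional to the stepsize is one simple way to realize "stepsize-aware": the selection criterion then tracks the shrinking update magnitude instead of filtering out nearly everything as training progresses.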

View on arXiv
@article{lu2025_2505.12479,
  title={$\gamma$-FedHT: Stepsize-Aware Hard-Threshold Gradient Compression in Federated Learning},
  author={Rongwei Lu and Yutong Jiang and Jinrui Zhang and Chunyang Li and Yifei Zhu and Bin Chen and Zhi Wang},
  journal={arXiv preprint arXiv:2505.12479},
  year={2025}
}