Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning

12 November 2019
Xinyan Dai
Xiao Yan
Kaiwen Zhou
Han Yang
K. K. Ng
James Cheng
Yu Fan
    FedML
arXiv:1911.04655
Abstract

The high cost of communicating gradients is a major bottleneck for federated learning, as the bandwidth of the participating user devices is limited. Existing gradient compression algorithms are mainly designed for data centers with high-speed networks and achieve $O(\sqrt{d} \log d)$ per-iteration communication cost at best, where $d$ is the size of the model. We propose hyper-sphere quantization (HSQ), a general framework that can be configured to achieve a continuum of trade-offs between communication efficiency and gradient accuracy. In particular, at the high compression ratio end, HSQ provides a low per-iteration communication cost of $O(\log d)$, which is favorable for federated learning. We prove the convergence of HSQ theoretically and show by experiments that HSQ significantly reduces the communication cost of model training without hurting convergence accuracy.
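To make the high-level idea concrete, here is a minimal sketch of hyper-sphere quantization as the abstract describes it: each gradient is sent as its norm plus the index of a nearby unit vector from a shared codebook, so the per-iteration payload is one scalar and about log2(k) index bits, i.e. $O(\log d)$ when the codebook size k grows polynomially in d. The random codebook, the nearest-codeword rule, and the names hsq_quantize / hsq_dequantize are illustrative assumptions, not the paper's exact construction.

import numpy as np

def hsq_quantize(grad, codebook):
    # Encode a gradient as (norm, codeword index): one float plus
    # log2(k) bits, where k is the codebook size.
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return 0.0, 0
    # Pick the unit codeword closest in direction (largest inner product).
    index = int(np.argmax(codebook @ (grad / norm)))
    return float(norm), index

def hsq_dequantize(norm, index, codebook):
    # Reconstruct the compressed gradient from its norm and codeword.
    return norm * codebook[index]

# Toy usage with a random unit-norm codebook (an illustrative choice,
# not necessarily the construction analyzed in the paper).
d, k = 256, 4096          # model size d, codebook size k -> 12-bit index
rng = np.random.default_rng(0)
codebook = rng.standard_normal((k, d))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

g = rng.standard_normal(d)
scale, idx = hsq_quantize(g, codebook)
g_hat = hsq_dequantize(scale, idx, codebook)
print(f"cosine similarity: {g @ g_hat / (np.linalg.norm(g) * scale):.3f}")

In practice a compressor like this trades gradient accuracy (how well g_hat approximates g) against the number of index bits, which is the communication/accuracy continuum the abstract refers to.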
