DEED: A General Quantization Scheme for Communication Efficiency in Bits

19 June 2020 · arXiv:2006.11401
Tian-Chun Ye, Peijun Xiao, Ruoyu Sun
Topics: FedML, MQ
Abstract

In distributed optimization, quantization is a popular technique to reduce communication. In this paper, we provide a general analysis framework for inexact gradient descent that is applicable to quantization schemes. We also propose a quantization scheme, Double Encoding and Error Diminishing (DEED). DEED achieves small communication complexity in three settings: frequent-communication large-memory, frequent-communication small-memory, and infrequent communication (e.g. federated learning). More specifically, in the frequent-communication large-memory setting, DEED can be easily combined with Nesterov's method, so that the total number of bits required is $\tilde{O}(\sqrt{\kappa} \log 1/\epsilon)$, where $\kappa$ is the condition number and $\tilde{O}$ hides numerical constants and $\log \kappa$ factors. In the frequent-communication small-memory setting, DEED combined with SGD requires only $\tilde{O}(\kappa \log 1/\epsilon)$ bits in the interpolation regime. In the infrequent-communication setting, DEED combined with Federated Averaging requires a smaller total number of bits than Federated Averaging alone. All these algorithms converge at the same rate as their non-quantized versions while using fewer bits.
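The key mechanism the abstract describes, transmitting a quantized gradient while carrying the quantization error forward so that it diminishes rather than accumulates, can be illustrated with a short sketch. The following is a minimal error-feedback quantized gradient descent in Python; the uniform quantizer and all function names are illustrative assumptions for this sketch, not the actual DEED encoding, which is specified in the paper itself.

import numpy as np

def quantize(v, num_levels=16):
    # Illustrative uniform quantizer (a stand-in for DEED's encoding,
    # which is specified in the paper itself).
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * num_levels) / num_levels * scale

def quantized_gd(grad_fn, x0, lr=0.1, steps=200):
    # Inexact gradient descent with error feedback: each step transmits
    # a low-bit quantized gradient and carries the quantization error
    # forward, so the error diminishes instead of accumulating.
    x = x0.astype(float).copy()
    err = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x) + err   # add back the residual from the last step
        q = quantize(g)        # the compressed message actually sent
        err = g - q            # remember what the quantizer dropped
        x = x - lr * q
    return x

# Usage: minimize f(x) = 0.5 * ||x||^2, whose exact gradient is x.
print(quantized_gd(lambda x: x, np.ones(5)))  # approaches the zero vector

In this sketch each iterate moves along the quantized direction only, matching the "inexact gradient descent" framing of the analysis framework; the error-feedback term is what keeps the quantized iterates converging at the rate of the unquantized method.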
