CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression

20 July 2021
Zhize Li
Peter Richtárik
Abstract

Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Moreover, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of communications (faster convergence), e.g., Nesterov's accelerated gradient descent (Nesterov, 1983, 2004) and Adam (Kingma and Ba, 2014). In order to combine the benefits of communication compression and convergence acceleration, we propose a \emph{compressed and accelerated} gradient method based on ANITA (Li, 2021) for distributed optimization, which we call CANITA. Our CANITA achieves the \emph{first accelerated rate} $O\bigg(\sqrt{\Big(1+\sqrt{\frac{\omega^3}{n}}\Big)\frac{L}{\epsilon}} + \omega\big(\frac{1}{\epsilon}\big)^{\frac{1}{3}}\bigg)$, which improves upon the state-of-the-art non-accelerated rate $O\left(\big(1+\frac{\omega}{n}\big)\frac{L}{\epsilon} + \frac{\omega^2+\omega}{\omega+n}\frac{1}{\epsilon}\right)$ of DIANA (Khaled et al., 2020) for distributed general convex problems, where $\epsilon$ is the target error, $L$ is the smoothness parameter of the objective, $n$ is the number of machines/devices, and $\omega$ is the compression parameter (larger $\omega$ means more compression can be applied, and no compression implies $\omega=0$). Our results show that as long as the number of devices $n$ is large (often true in distributed/federated learning), or the compression $\omega$ is not very high, CANITA achieves the faster convergence rate $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$, i.e., the number of communication rounds is $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$ (vs. $O\big(\frac{L}{\epsilon}\big)$ achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (far fewer communication rounds).
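
To make the compression parameter $\omega$ concrete, the sketch below shows a standard unbiased compressor commonly used in this literature: rand-$k$ sparsification, which keeps $k$ of $d$ coordinates and rescales them so that $E[C(x)] = x$ and $E[\|C(x)-x\|^2] \le \omega \|x\|^2$ with $\omega = d/k - 1$ (so smaller $k$ means more compression and larger $\omega$). This is an illustrative example of the compression model, not the CANITA algorithm itself; the function name and parameters are chosen here for illustration.

import numpy as np

def rand_k_compress(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased rand-k sparsification: keep k random coordinates, rescale by d/k.

    Satisfies E[C(x)] = x and E[||C(x) - x||^2] <= omega * ||x||^2
    with omega = d/k - 1, matching the variance condition behind the
    compression parameter omega discussed in the abstract.
    """
    d = x.size
    idx = rng.choice(d, size=k, replace=False)  # coordinates that survive compression
    out = np.zeros_like(x)
    out[idx] = (d / k) * x[idx]                 # rescaling keeps the estimate unbiased
    return out

# Tiny illustration: each device would transmit only k coordinates per round.
rng = np.random.default_rng(0)
g = rng.standard_normal(10)
print(rand_k_compress(g, k=3, rng=rng))

With $d=10$ and $k=3$, this compressor has $\omega = 10/3 - 1 \approx 2.33$, i.e., each round sends roughly 30% of the coordinates at the cost of higher variance in the gradient estimate.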
