Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression

8 June 2022
Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan

Papers citing "Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression"

7 / 7 papers shown
Accelerated Distributed Optimization with Compression and Error Feedback
Yuan Gao, Anton Rodomanov, Jeremy Rack, Sebastian U. Stich
11 Mar 2025
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Laurent Condat, A. Maranjyan, Peter Richtárik
07 Mar 2024
CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence
Kun-Yen Huang, Shin-Yi Pu
14 Jan 2023
Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks
Xinmeng Huang, Kun Yuan
01 Nov 2022
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization
A. Tyurin, Peter Richtárik
02 Feb 2022
EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik
07 Oct 2021
DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin
24 Apr 2021