Unbiased Compression Saves Communication in Distributed Optimization: When and How Much?

25 May 2023 · Yutong He, Xinmeng Huang, Kun Yuan · arXiv:2305.16297

Papers citing "Unbiased Compression Saves Communication in Distributed Optimization: When and How Much?"

4 papers shown

LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Laurent Condat, A. Maranjyan, Peter Richtárik
07 Mar 2024

Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks
Xinmeng Huang, Kun Yuan
01 Nov 2022

DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin
24 Apr 2021

ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks
D. Kovalev, Egor Shulgin, Peter Richtárik, Alexander Rogozin, Alexander Gasnikov
18 Feb 2021