ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication

11 September 2019
Sebastian U. Stich, Sai Praneeth Karimireddy
FedML

Papers citing "The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication"

9 / 9 papers shown

1. Clip21: Error Feedback for Gradient Clipping
   Sarit Khirirat, Eduard A. Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik
   30 May 2023

2. On the Stability Analysis of Open Federated Learning Systems (FedML)
   Youbang Sun, H. Fernando, Tianyi Chen, Shahin Shahrampour
   25 Sep 2022

3. NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data (FedML)
   Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia-Wei Liu, Zhengyuan Zhu
   17 Aug 2022

4. Towards Federated Learning on Time-Evolving Heterogeneous Data (FedML)
   Yongxin Guo, Tao R. Lin, Xiaoying Tang
   25 Dec 2021

5. Rethinking gradient sparsification as total error minimization
   Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
   02 Aug 2021

6. Fast Federated Learning in the Presence of Arbitrary Device Unavailability (FedML)
   Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang
   08 Jun 2021

7. MARINA: Faster Non-Convex Distributed Learning with Compression
   Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
   15 Feb 2021

8. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
   Hamed Karimi, J. Nutini, Mark W. Schmidt
   16 Aug 2016

9. Optimal Distributed Online Prediction using Mini-Batches
   O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
   07 Dec 2010