GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity

A. Maranjyan, M. Safaryan, Peter Richtárik
28 October 2022 · arXiv:2210.16402

Papers citing "GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity" (4 papers shown)

1. LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
   Laurent Condat, A. Maranjyan, Peter Richtárik
   07 Mar 2024

2. A Field Guide to Federated Optimization
   Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
   FedML · 14 Jul 2021

3. Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity
   Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani
   FedML · 28 Dec 2020

4. Linearly Converging Error Compensated SGD
   Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik
   23 Oct 2020