CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity
arXiv 2309.13307 · 23 September 2023
Pengyun Yue, Hanzheng Zhao, Cong Fang, Di He, Liwei Wang, Zhouchen Lin, Song-Chun Zhu

Papers citing "CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity" (8 papers shown)
Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization
Zhe Li, Bicheng Ying, Zidong Liu, Haibo Yang
FedML · 24 May 2024
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
A. Maranjyan, M. Safaryan, Peter Richtárik
28 Oct 2022
EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression
Kaja Gruntkowska, A. Tyurin, Peter Richtárik
30 Sep 2022
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization
A. Tyurin, Peter Richtárik
02 Feb 2022
EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik
07 Oct 2021
When is the Convergence Time of Langevin Algorithms Dimension Independent? A Composite Optimization Viewpoint
Y. Freund, Yi-An Ma, Tong Zhang
05 Oct 2021
DRIVE: One-bit Distributed Mean Estimation
S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher
OOD, FedML · 18 May 2021
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani
FedML · 14 Feb 2021