Local SGD Converges Fast and Communicates Little
Sebastian U. Stich · 24 May 2018 · arXiv:1805.09767 · FedML

Papers citing "Local SGD Converges Fast and Communicates Little"
50 / 629 papers shown

Towards Heterogeneous Clients with Elastic Federated Learning
Zichen Ma, Yu Lu, Zihan Lu, Wenye Li, Jinfeng Yi, Shuguang Cui · 17 Jun 2021 · FedML

Robust Training in High Dimensions via Block Coordinate Geometric Median Descent
Anish Acharya, Abolfazl Hashemi, Prateek Jain, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu · 16 Jun 2021

Decentralized Local Stochastic Extra-Gradient for Variational Inequalities
Aleksandr Beznosikov, Pavel Dvurechensky, Anastasia Koloskova, V. Samokhin, Sebastian U. Stich, Alexander Gasnikov · 15 Jun 2021

CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning
Haibo Yang, Jia Liu, Elizabeth S. Bentley · 14 Jun 2021 · FedML

Federated Learning with Buffered Asynchronous Aggregation
John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba · 11 Jun 2021 · FedML

Efficient and Less Centralized Federated Learning
Li Chou, Zichang Liu, Zhuang Wang, Anshumali Shrivastava · 11 Jun 2021 · FedML

Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning
Bokun Wang, Zhuoning Yuan, Yiming Ying, Tianbao Yang · 09 Jun 2021 · FedML

Communication-efficient SGD: From Local SGD to One-Shot Averaging
Artin Spiridonoff, Alexander Olshevsky, I. Paschalidis · 09 Jun 2021 · FedML

Fast Federated Learning in the Presence of Arbitrary Device Unavailability
Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang · 08 Jun 2021 · FedML

Asynchronous Speedup in Decentralized Optimization
Mathieu Even, Hadrien Hendrikx, Laurent Massoulie · 07 Jun 2021

Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning
Jinhyun So, Ramy E. Ali, Başak Güler, Jiantao Jiao, Salman Avestimehr · 07 Jun 2021 · FedML

Distributed Learning and its Application for Time-Series Prediction
Nhuong V. Nguyen, Sybille Legitime · 06 Jun 2021 · AI4TS

Preservation of the Global Knowledge by Not-True Distillation in Federated Learning
Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, Se-Young Yun · 06 Jun 2021 · FedML

FedNL: Making Newton-Type Methods Applicable to Federated Learning
M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik · 05 Jun 2021 · FedML

SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
Chaoyang He, Emir Ceyani, Keshav Balasubramanian, M. Annavaram, Salman Avestimehr · 04 Jun 2021 · FedML

Escaping Saddle Points with Compressed SGD
Dmitrii Avdiukhin, G. Yaroslavtsev · 21 May 2021

Accelerating Gossip SGD with Periodic Global Averaging
Yiming Chen, Kun Yuan, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin · 19 May 2021

Removing Data Heterogeneity Influence Enhances Network Topology Dependence of Decentralized SGD
Kun Yuan, Sulaiman A. Alghunaim, Xinmeng Huang · 17 May 2021

LocalNewton: Reducing Communication Bottleneck for Distributed Learning
Vipul Gupta, Avishek Ghosh, Michal Derezinski, Rajiv Khanna, K. Ramchandran, Michael W. Mahoney · 16 May 2021

Node Selection Toward Faster Convergence for Federated Learning on Non-IID Data
Hongda Wu, Ping Wang · 14 May 2021 · FedML

Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning
Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi · 12 May 2021 · FedML

Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design
Chuan Ma, Jun Li, Ming Ding, Kang Wei, Wen Chen, H. Vincent Poor · 10 May 2021 · FedML

Federated Face Recognition
Fan Bai, Jiaxiang Wu, Pengcheng Shen, Shaoxin Li, Shuigeng Zhou · 06 May 2021 · CVBM, FedML

OCTOPUS: Overcoming Performance and Privatization Bottlenecks in Distributed Learning
Shuo Wang, Surya Nepal, Kristen Moore, M. Grobler, Carsten Rudolph, A. Abuadbba · 03 May 2021 · FedML

DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin · 24 Apr 2021 · MoE

BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning
He Zhu, Qing Ling · 14 Apr 2021 · FedML, AAML

Towards Explainable Multi-Party Learning: A Contrastive Knowledge Sharing Framework
Yuan Gao, Jiawei Li, Maoguo Gong, Yu Xie, A. K. Qin · 14 Apr 2021

Accelerated Gradient Tracking over Time-varying Graphs for Decentralized Optimization
Huan Li, Zhouchen Lin · 06 Apr 2021

Federated Learning with Taskonomy for Non-IID Data
Hadi Jamali Rad, Mohammad Abdizadeh, Anuj Singh · 29 Mar 2021 · FedML

MergeComp: A Compression Scheduler for Scalable Communication-Efficient Distributed Training
Zhuang Wang, X. Wu, T. Ng · 28 Mar 2021 · GNN

Hierarchical Federated Learning with Quantization: Convergence Analysis and System Design
Lumin Liu, Jun Zhang, Shenghui Song, Khaled B. Letaief · 26 Mar 2021 · FedML

The Gradient Convergence Bound of Federated Multi-Agent Reinforcement Learning with Efficient Communication
Xing Xu, Rongpeng Li, Zhifeng Zhao, Honggang Zhang · 24 Mar 2021

Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning
Zachary B. Charles, Jakub Konecný · 08 Mar 2021 · FedML

Personalized Federated Learning using Hypernetworks
Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik · 08 Mar 2021 · FedML

FedDR -- Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization
Quoc Tran-Dinh, Nhan H. Pham, Dzung Phan, Lam M. Nguyen · 05 Mar 2021 · FedML

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko · 04 Mar 2021

FedPower: Privacy-Preserving Distributed Eigenspace Estimation
Xiaoxun Guo, Xiang Li, Xiangyu Chang, Shusen Wang, Zhihua Zhang · 01 Mar 2021 · FedML

Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency
Yuyang Deng, M. Mahdavi · 25 Feb 2021

Distributionally Robust Federated Averaging
Yuyang Deng, Mohammad Mahdi Kamani, M. Mahdavi · 25 Feb 2021 · FedML

Sustainable Federated Learning
Başak Güler, Aylin Yener · 22 Feb 2021

GIST: Distributed Training for Large-Scale Graph Convolutional Networks
Cameron R. Wolfe, Jingkang Yang, Arindam Chowdhury, Chen Dun, Artun Bayer, Santiago Segarra, Anastasios Kyrillidis · 20 Feb 2021 · BDL, GNN, LRM

Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques
Filip Hanzely, Boxin Zhao, Mladen Kolar · 19 Feb 2021 · FedML

Peering Beyond the Gradient Veil with Distributed Auto Differentiation
Bradley T. Baker, Aashis Khanal, Vince D. Calhoun, Barak A. Pearlmutter, Sergey Plis · 18 Feb 2021

Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence
Karl Bäckström, Ivan Walulya, Marina Papatriantafilou, P. Tsigas · 17 Feb 2021

Oscars: Adaptive Semi-Synchronous Parallel Model for Distributed Deep Learning with Global View
Sheng-Jun Huang · 17 Feb 2021

Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach
Junshan Zhang, Na Li, M. Dedeoglu · 16 Feb 2021 · FedML

Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization
M. Safaryan, Filip Hanzely, Peter Richtárik · 14 Feb 2021

Task-oriented Communication Design in Cyber-Physical Systems: A Survey on Theory and Applications
Arsham Mostaani, T. Vu, Shree Krishna Sharma, Van-Dinh Nguyen, Qi Liao, Symeon Chatzinotas · 14 Feb 2021

Distributed Second Order Methods with Fast Rates and Compressed Communication
Rustem Islamov, Xun Qian, Peter Richtárik · 14 Feb 2021

Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani · 14 Feb 2021 · FedML