Local SGD Converges Fast and Communicates Little
Sebastian U. Stich · 24 May 2018 · arXiv:1805.09767 · FedML
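The cited paper studies local SGD, in which each worker runs several SGD steps on its own copy of the model and the copies are only periodically averaged, so communication happens once per round rather than once per step. A minimal NumPy sketch of that update pattern follows; the function name, toy quadratic objective, and hyperparameters are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch of local SGD with periodic averaging (not the paper's code).
import numpy as np

def local_sgd(x0, grad_fn, n_workers=4, rounds=20, local_steps=10, lr=0.1, seed=0):
    """Each worker performs `local_steps` SGD steps on its own parameter copy,
    then all copies are averaged; this repeats for `rounds` communication rounds."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(rounds):
        replicas = [x.copy() for _ in range(n_workers)]
        for w in range(n_workers):
            for _ in range(local_steps):
                # Simulated stochastic gradient: true gradient plus small noise.
                noise = rng.normal(scale=0.01, size=x.shape)
                replicas[w] = replicas[w] - lr * (grad_fn(replicas[w]) + noise)
        # One communication step per round: average the worker models.
        x = np.mean(replicas, axis=0)
    return x

# Toy usage: minimize f(x) = ||x||^2 / 2, whose gradient is x.
x_final = local_sgd(x0=np.ones(5), grad_fn=lambda x: x)
print(x_final)
```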
Papers citing "Local SGD Converges Fast and Communicates Little"
50 / 629 papers shown

Periodic Stochastic Gradient Descent with Momentum for Decentralized Training
Hongchang Gao, Heng-Chiao Huang · 24 Aug 2020

Step-Ahead Error Feedback for Distributed Training with Compressed Gradient
An Xu, Zhouyuan Huo, Heng-Chiao Huang · 13 Aug 2020

FedSKETCH: Communication-Efficient and Private Federated Learning via Sketching
Farzin Haddadpour, Belhal Karimi, Ping Li, Xiaoyun Li · FedML · 11 Aug 2020

Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning
Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, M. Mohri, Sashank J. Reddi, Sebastian U. Stich, A. Suresh · FedML · 08 Aug 2020

State-of-the-art Techniques in Deep Edge Intelligence
Ahnaf Hannan Lodhi, Barış Akgün, Öznur Özkasap · 03 Aug 2020

Multi-Level Local SGD for Heterogeneous Hierarchical Networks
Timothy Castiglia, Anirban Das, S. Patterson · 27 Jul 2020

CSER: Communication-efficient SGD with Error Reset
Cong Xie, Shuai Zheng, Oluwasanmi Koyejo, Indranil Gupta, Mu Li, Haibin Lin · 26 Jul 2020

Fast-Convergent Federated Learning
Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, M. Chiang, H. Vincent Poor · FedML · 26 Jul 2020

DBS: Dynamic Batch Size For Distributed Deep Neural Network Training
Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv · 23 Jul 2020

Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise
Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen · FedML · 17 Jul 2020

FetchSGD: Communication-Efficient Federated Learning with Sketching
D. Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph E. Gonzalez, Raman Arora · FedML · 15 Jul 2020

Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor · MoMe · FedML · 15 Jul 2020

Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning
Wenqi Shi, Sheng Zhou, Z. Niu, Miao Jiang, Lu Geng · 14 Jul 2020

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
Hongyi Wang, Kartik K. Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos · FedML · 09 Jul 2020

DS-Sync: Addressing Network Bottlenecks with Divide-and-Shuffle Synchronization for Distributed DNN Training
Weiyan Wang, Cengguang Zhang, Liu Yang, Kai Chen, Kun Tan · 07 Jul 2020

Multi-Armed Bandit Based Client Scheduling for Federated Learning
Wenchao Xia, Tony Q. S. Quek, Kun Guo, Wanli Wen, Howard H. Yang, Hongbo Zhu · FedML · 05 Jul 2020

Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi · FedML · 02 Jul 2020

On the Outsized Importance of Learning Rates in Local Update Methods
Zachary B. Charles, Jakub Konecný · FedML · 02 Jul 2020

Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees
Constantin Philippenko, Aymeric Dieuleveut · FedML · 25 Jun 2020

Taming GANs with Lookahead-Minmax
Tatjana Chavdarova, Matteo Pagliardini, Sebastian U. Stich, F. Fleuret, Martin Jaggi · GAN · 25 Jun 2020

Local Stochastic Approximation: A Unified View of Federated Learning and Distributed Multi-Task Reinforcement Learning Algorithms
Thinh T. Doan · FedML · 24 Jun 2020

Federated Learning Meets Multi-objective Optimization
Zeou Hu, K. Shaloudegi, Guojun Zhang, Yaoliang Yu · FedML · 20 Jun 2020

DEED: A General Quantization Scheme for Communication Efficiency in Bits
Tian-Chun Ye, Peijun Xiao, Ruoyu Sun · FedML · MQ · 19 Jun 2020

A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Samuel Horváth, Peter Richtárik · 19 Jun 2020

Federated Learning With Quantized Global Model Updates
M. Amiri, Deniz Gunduz, Sanjeev R. Kulkarni, H. Vincent Poor · FedML · 18 Jun 2020

Communication-Efficient Robust Federated Learning Over Heterogeneous Datasets
Yanjie Dong, G. Giannakis, Tianyi Chen, Julian Cheng, Md. Jahangir Hossain, Victor C. M. Leung · FedML · 17 Jun 2020

Federated Accelerated Stochastic Gradient Descent
Honglin Yuan, Tengyu Ma · FedML · 16 Jun 2020

Robust Federated Learning: The Case of Affine Distribution Shifts
Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie · FedML · OOD · 16 Jun 2020

Personalized Federated Learning with Moreau Envelopes
Canh T. Dinh, N. H. Tran, Tuan Dung Nguyen · FedML · 16 Jun 2020

FedGAN: Federated Generative Adversarial Networks for Distributed Data
M. Rasouli, Tao Sun, Ram Rajagopal · FedML · 12 Jun 2020

An Accurate, Scalable and Verifiable Protocol for Federated Differentially Private Averaging
C. Sabater, A. Bellet, J. Ramon · FedML · 12 Jun 2020

A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization
Zhize Li, Peter Richtárik · FedML · 12 Jun 2020

STL-SGD: Speeding Up Local SGD with Stagewise Communication Period
Shuheng Shen, Yifei Cheng, Jingchang Liu, Linli Xu · LRM · 11 Jun 2020

Minibatch vs Local SGD for Heterogeneous Distributed Learning
Blake E. Woodworth, Kumar Kshitij Patel, Nathan Srebro · FedML · 08 Jun 2020

UVeQFed: Universal Vector Quantization for Federated Learning
Nir Shlezinger, Mingzhe Chen, Yonina C. Eldar, H. Vincent Poor, Shuguang Cui · FedML · MQ · 05 Jun 2020

Local SGD With a Communication Overhead Depending Only on the Number of Workers
Artin Spiridonoff, Alexander Olshevsky, I. Paschalidis · FedML · 03 Jun 2020

FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data
Xinwei Zhang, Mingyi Hong, S. Dhople, W. Yin, Yang Liu · FedML · 22 May 2020

MixML: A Unified Analysis of Weakly Consistent Parallel Learning
Yucheng Lu, J. Nash, Christopher De Sa · FedML · 14 May 2020

SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization
Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi · 13 May 2020

Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks
Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang · 05 May 2020

Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging
Shigang Li, Tal Ben-Nun, Giorgi Nadiradze, Salvatore Di Girolamo, Nikoli Dryden, Dan Alistarh, Torsten Hoefler · 30 Apr 2020

Detached Error Feedback for Distributed SGD with Random Sparsification
An Xu, Heng-Chiao Huang · 11 Apr 2020

Client Selection and Bandwidth Allocation in Wireless Federated Learning Networks: A Long-Term Perspective
Jie Xu, Heqiang Wang · 09 Apr 2020

From Local SGD to Local Fixed-Point Methods for Federated Learning
Grigory Malinovsky, D. Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik · FedML · 03 Apr 2020

Concentrated Differentially Private and Utility Preserving Federated Learning
Rui Hu, Yuanxiong Guo, Yanmin Gong · FedML · 30 Mar 2020

Adaptive Personalized Federated Learning
Yuyang Deng, Mohammad Mahdi Kamani, M. Mahdavi · FedML · 30 Mar 2020

Differentially Private Federated Learning for Resource-Constrained Internet of Things
Rui Hu, Yuanxiong Guo, E. Ratazzi, Yanmin Gong · FedML · 28 Mar 2020

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich · FedML · 23 Mar 2020

Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang, S. Shi, Wei Wang, Bo-wen Li, Xiaowen Chu · 10 Mar 2020

Ternary Compression for Communication-Efficient Federated Learning
Jinjin Xu, W. Du, Ran Cheng, Wangli He, Yaochu Jin · MQ · FedML · 07 Mar 2020