arXiv:1808.07217 (v6, latest)
Don't Use Large Mini-Batches, Use Local SGD
22 August 2018 · Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
Papers citing "Don't Use Large Mini-Batches, Use Local SGD" (50 of 280 papers shown)
Optimality and Stability in Federated Learning: A Game-theoretic Approach
  Neural Information Processing Systems (NeurIPS), 2021 · Kate Donahue, Jon M. Kleinberg · FedML · 17 Jun 2021
Towards Heterogeneous Clients with Elastic Federated Learning
  Zichen Ma, Yu Lu, Zihan Lu, Wenye Li, Jinfeng Yi, Shuguang Cui · FedML · 17 Jun 2021
On Large-Cohort Training for Federated Learning
  Neural Information Processing Systems (NeurIPS), 2021 · Zachary B. Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian, Virginia Smith · FedML · 15 Jun 2021
CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning
  International Symposium on Modeling and Optimization in Mobile, Ad-Hoc and Wireless Networks (WiOpt), 2021 · Haibo Yang, Jia Liu, Elizabeth S. Bentley · FedML · 14 Jun 2021
Federated Learning with Buffered Asynchronous Aggregation
  International Conference on Artificial Intelligence and Statistics (AISTATS), 2021 · John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba · FedML · 11 Jun 2021
Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning
  Journal of Machine Learning Research (JMLR), 2021 · Bokun Wang, Zhuoning Yuan, Yiming Ying, Tianbao Yang · FedML · 09 Jun 2021
Communication-efficient SGD: From Local SGD to One-Shot Averaging
  Neural Information Processing Systems (NeurIPS), 2021 · Artin Spiridonoff, Alexander Olshevsky, I. Paschalidis · FedML · 09 Jun 2021
SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
  Chaoyang He, Emir Ceyani, Keshav Balasubramanian, M. Annavaram, Salman Avestimehr · FedML · 04 Jun 2021
On Linear Stability of SGD and Input-Smoothness of Neural Networks
  Neural Information Processing Systems (NeurIPS), 2021 · Chao Ma, Lexing Ying · MLT · 27 May 2021
Fast Federated Learning by Balancing Communication Trade-Offs
  IEEE Transactions on Communications (IEEE Trans. Commun.), 2021 · Milad Khademi Nori, Sangseok Yun, Il-Min Kim · FedML · 23 May 2021
Accelerating Gossip SGD with Periodic Global Averaging
  International Conference on Machine Learning (ICML), 2021 · Yiming Chen, Kun Yuan, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin · 19 May 2021
Towards Demystifying Serverless Machine Learning Training
  Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, Ce Zhang · 17 May 2021
LocalNewton: Reducing Communication Bottleneck for Distributed Learning
  Vipul Gupta, Avishek Ghosh, Michal Derezinski, Rajiv Khanna, Kannan Ramchandran, Michael W. Mahoney · 16 May 2021
Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning
  International Conference on Machine Learning (ICML), 2021 · Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi · FedML · 12 May 2021
Citadel: Protecting Data Privacy and Model Confidentiality for Collaborative Learning with SGX
  ACM Symposium on Cloud Computing (SoCC), 2021 · Chengliang Zhang, Junzhe Xia, Baichen Yang, Huancheng Puyang, Wei Wang, Ruichuan Chen, Istemi Ekin Akkus, Paarijaat Aditya, Feng Yan · FedML · 04 May 2021
OpTorch: Optimized deep learning architectures for resource limited environments
  Salman Ahmed, Hammad Naveed · 03 May 2021
BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning
  IEEE Transactions on Signal and Information Processing over Networks (TSIPN), 2021 · He Zhu, Qing Ling · FedML, AAML · 14 Apr 2021
Relating Adversarially Robust Generalization to Flat Minima
  IEEE International Conference on Computer Vision (ICCV), 2021 · David Stutz, Matthias Hein, Bernt Schiele · OOD · 09 Apr 2021
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
  IEEE Journal on Selected Areas in Communications (JSAC), 2021 · Mingzhe Chen, Deniz Gündüz, Kaibin Huang, Walid Saad, M. Bennis, Aneta Vulgarakis Feljan, H. Vincent Poor · 05 Apr 2021
MergeComp: A Compression Scheduler for Scalable Communication-Efficient Distributed Training
  Zhuang Wang, X. Wu, T. Ng · GNN · 28 Mar 2021
Personalized Federated Learning using Hypernetworks
  International Conference on Machine Learning (ICML), 2021 · Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik · FedML · 08 Mar 2021
FedDR -- Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization
  Neural Information Processing Systems (NeurIPS), 2021 · Quoc Tran-Dinh, Nhan H. Pham, Dzung Phan, Lam M. Nguyen · FedML · 05 Mar 2021
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
  Neural Information Processing Systems (NeurIPS), 2021 · Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko · 04 Mar 2021
Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency
  International Conference on Artificial Intelligence and Statistics (AISTATS), 2021 · Yuyang Deng, M. Mahdavi · 25 Feb 2021
GIST: Distributed Training for Large-Scale Graph Convolutional Networks
  Journal of Applied and Computational Topology (JACT), 2021 · Cameron R. Wolfe, Jingkang Yang, Arindam Chowdhury, Chen Dun, Artun Bayer, Santiago Segarra, Anastasios Kyrillidis · BDL, GNN, LRM · 20 Feb 2021
Consensus Control for Decentralized Deep Learning
  International Conference on Machine Learning (ICML), 2021 · Lingjing Kong, Tao Lin, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich · 09 Feb 2021
Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
  International Conference on Machine Learning (ICML), 2021 · Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi · FedML · 09 Feb 2021
Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity
  International Conference on Machine Learning (ICML), 2021 · Zhuoning Yuan, Zhishuai Guo, Yi Tian Xu, Yiming Ying, Tianbao Yang · FedML · 09 Feb 2021
Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning
  International Conference on Machine Learning (ICML), 2021 · Tomoya Murata, Taiji Suzuki · FedML · 05 Feb 2021
Truly Sparse Neural Networks at Scale
  Selima Curci, Decebal Constantin Mocanu, Mykola Pechenizkiy · 02 Feb 2021
Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning
  International Conference on Learning Representations (ICLR), 2021 · Haibo Yang, Minghong Fang, Jia Liu · FedML · 27 Jan 2021
Delayed Projection Techniques for Linearly Constrained Problems: Convergence Rates, Acceleration, and Applications
  Xiang Li, Zhihua Zhang · 05 Jan 2021
CADA: Communication-Adaptive Distributed Adam
  International Conference on Artificial Intelligence and Statistics (AISTATS), 2020 · Tianyi Chen, Ziye Guo, Yuejiao Sun, W. Yin · ODL · 31 Dec 2020
To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices
  IEEE Conference on Computer Communications (INFOCOM), 2020 · Liang Li, Dian Shi, Ronghui Hou, Hui Li, Miao Pan, Zhu Han · FedML · 22 Dec 2020
Study on the Large Batch Size Training of Neural Networks Based on the Second Order Gradient
  Fengli Gao, Huicai Zhong · ODL · 16 Dec 2020
Accurate and Fast Federated Learning via IID and Communication-Aware Grouping
  Jin-Woo Lee, Jaehoon Oh, Yooju Shin, Jae-Gil Lee, Seyoul Yoon · FedML · 09 Dec 2020
TornadoAggregate: Accurate and Scalable Federated Learning via the Ring-Based Architecture
  Jin-Woo Lee, Jaehoon Oh, Sungsu Lim, Se-Young Yun, Jae-Gil Lee · FedML · 06 Dec 2020
Distributed Sparse SGD with Majority Voting
  Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz · FedML · 12 Nov 2020
Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning
  Nader Bouacida, Jiahui Hou, H. Zang, Xin Liu · FedML · 08 Nov 2020
Local SGD: Unified Theory and New Efficient Methods
  Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik · FedML · 03 Nov 2020
Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
  Saurabh Agarwal, Hongyi Wang, Kangwook Lee, Shivaram Venkataraman, Dimitris Papailiopoulos · 29 Oct 2020
Optimal Client Sampling for Federated Learning
  Jiajun He, Samuel Horváth, Peter Richtárik · FedML · 26 Oct 2020
Demystifying Why Local Aggregation Helps: Convergence Analysis of Hierarchical SGD
  AAAI Conference on Artificial Intelligence (AAAI), 2020 · Jiayi Wang, Maroun Touma, Rong-Rong Chen, Mingyue Ji · FedML · 24 Oct 2020
Throughput-Optimal Topology Design for Cross-Silo Federated Learning
  Neural Information Processing Systems (NeurIPS), 2020 · Othmane Marfoq, Chuan Xu, Giovanni Neglia, Richard Vidal · FedML · 23 Oct 2020
Blind Federated Edge Learning
  IEEE Transactions on Wireless Communications (TWC), 2020 · M. Amiri, T. Duman, Deniz Gunduz, Sanjeev R. Kulkarni, H. Vincent Poor · 19 Oct 2020
Oort: Efficient Federated Learning via Guided Participant Selection
  Fan Lai, Xiangfeng Zhu, H. Madhyastha, Mosharaf Chowdhury · FedML, OODD · 12 Oct 2020
Sparse Communication for Training Deep Networks
  Negar Foroutan, Martin Jaggi · FedML · 19 Sep 2020
Periodic Stochastic Gradient Descent with Momentum for Decentralized Training
  Hongchang Gao, Heng-Chiao Huang · 24 Aug 2020
Stochastic Normalized Gradient Descent with Momentum for Large-Batch Training
  Science China Information Sciences (Sci China Inf Sci), 2020 · Shen-Yi Zhao, Chang-Wei Shi, Yin-Peng Xie, Wu-Jun Li · ODL · 28 Jul 2020
Multi-Level Local SGD for Heterogeneous Hierarchical Networks
  International Conference on Learning Representations (ICLR), 2020 · Timothy Castiglia, Anirban Das, S. Patterson · 27 Jul 2020