Don't Use Large Mini-Batches, Use Local SGD
arXiv:1808.07217, 22 August 2018
Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
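For readers skimming this citation list, a minimal sketch of the local SGD scheme the paper advocates may help: instead of taking one step on a large aggregated mini-batch, each of K workers takes H independent SGD steps on its own data shard, and the worker models are then averaged. The toy least-squares objective, shard sizes, and hyperparameters below are illustrative assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize mean((X w - y)^2).
# Data and hyperparameters are illustrative assumptions only.
n, d = 1024, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

K = 8            # number of workers
H = 16           # local SGD steps between averaging rounds
lr = 0.05        # step size
shards = np.array_split(rng.permutation(n), K)  # one data shard per worker

def sgd_step(w, idx):
    """One SGD step on the mini-batch given by row indices idx."""
    Xb, yb = X[idx], y[idx]
    grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)
    return w - lr * grad

w = np.zeros(d)  # shared initial model
for round_ in range(20):
    local_models = []
    for shard in shards:
        w_local = w.copy()
        for _ in range(H):  # H local steps, no communication
            batch = rng.choice(shard, size=32)
            w_local = sgd_step(w_local, batch)
        local_models.append(w_local)
    w = np.mean(local_models, axis=0)  # synchronize: average worker models
    print(f"round {round_:2d}  loss {np.mean((X @ w - y) ** 2):.5f}")

Each worker communicates only once every H steps, which is the communication saving that many of the citing works below build on; the paper's reported generalization advantage over large-batch SGD is an empirical finding, not something this sketch demonstrates.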
Papers citing "Don't Use Large Mini-Batches, Use Local SGD" (50 of 271 shown):
- Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning (Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi; 12 May 2021) [FedML]
- Citadel: Protecting Data Privacy and Model Confidentiality for Collaborative Learning with SGX (Chengliang Zhang, Junzhe Xia, Baichen Yang, Huancheng Puyang, W. Wang, Ruichuan Chen, Istemi Ekin Akkus, Paarijaat Aditya, Feng Yan; 04 May 2021) [FedML]
- OpTorch: Optimized deep learning architectures for resource limited environments (Salman Ahmed, Hammad Naveed; 03 May 2021)
- BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning (He Zhu, Qing Ling; 14 Apr 2021) [FedML, AAML]
- Relating Adversarially Robust Generalization to Flat Minima (David Stutz, Matthias Hein, Bernt Schiele; 09 Apr 2021) [OOD]
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges (Mingzhe Chen, Deniz Gündüz, Kaibin Huang, Walid Saad, M. Bennis, Aneta Vulgarakis Feljan, H. Vincent Poor; 05 Apr 2021)
- MergeComp: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (Zhuang Wang, X. Wu, T. Ng; 28 Mar 2021) [GNN]
- Personalized Federated Learning using Hypernetworks (Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik; 08 Mar 2021) [FedML]
- FedDR -- Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization (Quoc Tran-Dinh, Nhan H. Pham, Dzung Phan, Lam M. Nguyen; 05 Mar 2021) [FedML]
- Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices (Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko; 04 Mar 2021)
- Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency (Yuyang Deng, M. Mahdavi; 25 Feb 2021)
- GIST: Distributed Training for Large-Scale Graph Convolutional Networks (Cameron R. Wolfe, Jingkang Yang, Arindam Chowdhury, Chen Dun, Artun Bayer, Santiago Segarra, Anastasios Kyrillidis; 20 Feb 2021) [BDL, GNN, LRM]
- Consensus Control for Decentralized Deep Learning (Lingjing Kong, Tao R. Lin, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich; 09 Feb 2021)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data (Tao R. Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi; 09 Feb 2021) [FedML]
- Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity (Zhuoning Yuan, Zhishuai Guo, Yi Tian Xu, Yiming Ying, Tianbao Yang; 09 Feb 2021) [FedML]
- Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning (Tomoya Murata, Taiji Suzuki; 05 Feb 2021) [FedML]
- Truly Sparse Neural Networks at Scale (Selima Curci, D. Mocanu, Mykola Pechenizkiy; 02 Feb 2021)
- Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning (Haibo Yang, Minghong Fang, Jia Liu; 27 Jan 2021) [FedML]
- Delayed Projection Techniques for Linearly Constrained Problems: Convergence Rates, Acceleration, and Applications (Xiang Li, Zhihua Zhang; 05 Jan 2021)
- CADA: Communication-Adaptive Distributed Adam (Tianyi Chen, Ziye Guo, Yuejiao Sun, W. Yin; 31 Dec 2020) [ODL]
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices (Liang Li, Dian Shi, Ronghui Hou, Hui Li, M. Pan, Zhu Han; 22 Dec 2020) [FedML]
- Study on the Large Batch Size Training of Neural Networks Based on the Second Order Gradient (Fengli Gao, Huicai Zhong; 16 Dec 2020) [ODL]
- Accurate and Fast Federated Learning via IID and Communication-Aware Grouping (Jin-Woo Lee, Jaehoon Oh, Yooju Shin, Jae-Gil Lee, Seyoul Yoon; 09 Dec 2020) [FedML]
- TornadoAggregate: Accurate and Scalable Federated Learning via the Ring-Based Architecture (Jin-Woo Lee, Jaehoon Oh, Sungsu Lim, Se-Young Yun, Jae-Gil Lee; 06 Dec 2020) [FedML]
- Distributed Sparse SGD with Majority Voting (Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz; 12 Nov 2020) [FedML]
- Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning (Nader Bouacida, Jiahui Hou, H. Zang, Xin Liu; 08 Nov 2020) [FedML]
- Local SGD: Unified Theory and New Efficient Methods (Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik; 03 Nov 2020) [FedML]
- Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification (Saurabh Agarwal, Hongyi Wang, Kangwook Lee, Shivaram Venkataraman, Dimitris Papailiopoulos; 29 Oct 2020)
- Optimal Client Sampling for Federated Learning (Wenlin Chen, Samuel Horváth, Peter Richtárik; 26 Oct 2020) [FedML]
- Demystifying Why Local Aggregation Helps: Convergence Analysis of Hierarchical SGD (Jiayi Wang, Shiqiang Wang, Rong-Rong Chen, Mingyue Ji; 24 Oct 2020) [FedML]
- Throughput-Optimal Topology Design for Cross-Silo Federated Learning (Othmane Marfoq, Chuan Xu, Giovanni Neglia, Richard Vidal; 23 Oct 2020) [FedML]
- Blind Federated Edge Learning (M. Amiri, T. Duman, Deniz Gunduz, Sanjeev R. Kulkarni, H. Vincent Poor; 19 Oct 2020)
- Oort: Efficient Federated Learning via Guided Participant Selection (Fan Lai, Xiangfeng Zhu, H. Madhyastha, Mosharaf Chowdhury; 12 Oct 2020) [FedML, OODD]
- Sparse Communication for Training Deep Networks (Negar Foroutan, Martin Jaggi; 19 Sep 2020) [FedML]
- Periodic Stochastic Gradient Descent with Momentum for Decentralized Training (Hongchang Gao, Heng-Chiao Huang; 24 Aug 2020)
- Stochastic Normalized Gradient Descent with Momentum for Large-Batch Training (Shen-Yi Zhao, Chang-Wei Shi, Yin-Peng Xie, Wu-Jun Li; 28 Jul 2020) [ODL]
- Multi-Level Local SGD for Heterogeneous Hierarchical Networks (Timothy Castiglia, Anirban Das, S. Patterson; 27 Jul 2020)
- CSER: Communication-efficient SGD with Error Reset (Cong Xie, Shuai Zheng, Oluwasanmi Koyejo, Indranil Gupta, Mu Li, Haibin Lin; 26 Jul 2020)
- Fast-Convergent Federated Learning (Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, M. Chiang, H. Vincent Poor; 26 Jul 2020) [FedML]
- FetchSGD: Communication-Efficient Federated Learning with Sketching (D. Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph E. Gonzalez, Raman Arora; 15 Jul 2020) [FedML]
- Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization (Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor; 15 Jul 2020) [MoMe, FedML]
- Analyzing and Mitigating Data Stalls in DNN Training (Jayashree Mohan, Amar Phanishayee, Ashish Raniwala, Vijay Chidambaram; 14 Jul 2020)
- A Study of Gradient Variance in Deep Learning (Fartash Faghri, D. Duvenaud, David J. Fleet, Jimmy Ba; 09 Jul 2020) [FedML, ODL]
- Federated Learning with Compression: Unified Analysis and Sharp Guarantees (Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi; 02 Jul 2020) [FedML]
- Shuffle-Exchange Brings Faster: Reduce the Idle Time During Communication for Decentralized Neural Network Training (Xiang Yang; 01 Jul 2020) [FedML]
- A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning (Samuel Horváth, Peter Richtárik; 19 Jun 2020)
- Federated Learning With Quantized Global Model Updates (M. Amiri, Deniz Gunduz, Sanjeev R. Kulkarni, H. Vincent Poor; 18 Jun 2020) [FedML]
- Personalized Federated Learning with Moreau Envelopes (Canh T. Dinh, N. H. Tran, Tuan Dung Nguyen; 16 Jun 2020) [FedML]
- The Limit of the Batch Size (Yang You, Yuhui Wang, Huan Zhang, Zhao-jie Zhang, J. Demmel, Cho-Jui Hsieh; 15 Jun 2020)
- Optimal Complexity in Decentralized Training (Yucheng Lu, Christopher De Sa; 15 Jun 2020)