ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1905.03817
On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization

9 May 2019
Hao Yu
R. L. Jin
Sen Yang
    FedML

Papers citing "On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization"

22 of 72 citing papers shown
Federated Learning under Importance Sampling
Elsa Rizk
Stefan Vlaski
A. H. Sayed
FedML
14 Dec 2020
Distributed Machine Learning for Wireless Communication Networks: Techniques, Architectures, and Applications
Shuyan Hu
Xiaojing Chen
Wei Ni
E. Hossain
Xin Wang
AI4CE
02 Dec 2020
Federated Composite Optimization
Honglin Yuan
Manzil Zaheer
Sashank J. Reddi
FedML
17 Nov 2020
Demystifying Why Local Aggregation Helps: Convergence Analysis of Hierarchical SGD
Jiayi Wang
Shiqiang Wang
Rong-Rong Chen
Mingyue Ji
FedML
24 Oct 2020
Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM
Chaouki Ben Issaid
Anis Elgabli
Jihong Park
M. Bennis
Mérouane Debbah
FedML
14 Sep 2020
Periodic Stochastic Gradient Descent with Momentum for Decentralized Training
Hongchang Gao
Heng-Chiao Huang
24 Aug 2020
Stochastic Normalized Gradient Descent with Momentum for Large-Batch Training
Shen-Yi Zhao
Chang-Wei Shi
Yin-Peng Xie
Wu-Jun Li
ODL
28 Jul 2020
Fast-Convergent Federated Learning
Hung T. Nguyen
Vikash Sehwag
Seyyedali Hosseinalipour
Christopher G. Brinton
M. Chiang
H. Vincent Poor
FedML
26 Jul 2020
Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
Jianyu Wang
Qinghua Liu
Hao Liang
Gauri Joshi
H. Vincent Poor
MoMe
FedML
15 Jul 2020
Robust Federated Learning: The Case of Affine Distribution Shifts
Amirhossein Reisizadeh
Farzan Farnia
Ramtin Pedarsani
Ali Jadbabaie
FedML
OOD
16 Jun 2020
Optimal Complexity in Decentralized Training
Yucheng Lu
Christopher De Sa
15 Jun 2020
FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data
Xinwei Zhang
Mingyi Hong
S. Dhople
W. Yin
Yang Liu
FedML
22 May 2020
Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks
Zhishuai Guo
Mingrui Liu
Zhuoning Yuan
Li Shen
Wei Liu
Tianbao Yang
05 May 2020
Buffered Asynchronous SGD for Byzantine Learning
Yi-Rui Yang
Wu-Jun Li
FedML
02 Mar 2020
Dynamic Federated Learning
Elsa Rizk
Stefan Vlaski
A. H. Sayed
FedML
20 Feb 2020
Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability
Yikai Yan
Chaoyue Niu
Yucheng Ding
Zhenzhe Zheng
Fan Wu
Guihai Chen
Shaojie Tang
Zhihua Wu
FedML
18 Feb 2020
Variance Reduced Local SGD with Lower Communication Complexity
Xian-Feng Liang
Shuheng Shen
Jingchang Liu
Zhen Pan
Enhong Chen
Yifei Cheng
FedML
30 Dec 2019
On the Convergence of Local Descent Methods in Federated Learning
Farzin Haddadpour
M. Mahdavi
FedML
31 Oct 2019
Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning
Anis Elgabli
Jihong Park
Amrit Singh Bedi
Chaouki Ben Issaid
M. Bennis
Vaneet Aggarwal
23 Oct 2019
Communication-Efficient Local Decentralized SGD Methods
Xiang Li
Wenhao Yang
Shusen Wang
Zhihua Zhang
21 Oct 2019
Model Pruning Enables Efficient Federated Learning on Edge Devices
Yuang Jiang
Shiqiang Wang
Victor Valls
Bongjun Ko
Wei-Han Lee
Kin K. Leung
Leandros Tassiulas
26 Sep 2019
Optimal Distributed Online Prediction using Mini-Batches
O. Dekel
Ran Gilad-Bachrach
Ohad Shamir
Lin Xiao
07 Dec 2010