ResearchTrend.AI

Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent

25 May 2017
Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu

Papers citing "Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent"

Showing 50 of 627 citing papers.
Communication-Censored Linearized ADMM for Decentralized Consensus Optimization
IEEE Transactions on Signal and Information Processing over Networks (TSIPN), 2019
Weiyu Li, Yaohua Liu, Z. Tian, Qing Ling
15 Sep 2019

Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction
Boyue Li, Shicong Cen, Yuxin Chen, Yuejie Chi
12 Sep 2019

Gradient Descent with Compressed Iterates
Ahmed Khaled, Peter Richtárik
10 Sep 2019

Distributed Deep Learning with Event-Triggered Communication
Jemin George, Prudhvi K. Gurram
08 Sep 2019

Decentralized Stochastic Gradient Tracking for Non-convex Empirical Risk Minimization
Jiaqi Zhang, Keyou You
06 Sep 2019

Federated Learning: Challenges, Methods, and Future Directions
IEEE Signal Processing Magazine (IEEE SPM), 2019
Tian Li, Anit Kumar Sahu, Ameet Talwalkar, Virginia Smith
21 Aug 2019

Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations
ACM SIGPLAN Symposium on Principles & Practice of Parallel Programming (PPoPP), 2019
Shigang Li, Tal Ben-Nun, Salvatore Di Girolamo, Dan Alistarh, Torsten Hoefler
12 Aug 2019

Robust and Communication-Efficient Collaborative Learning
Neural Information Processing Systems (NeurIPS), 2019
Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
24 Jul 2019

An introduction to decentralized stochastic optimization with gradient tracking
Ran Xin, S. Kar, U. Khan
23 Jul 2019

Decentralized Deep Learning with Arbitrary Communication Compression
International Conference on Learning Representations (ICLR), 2019
Anastasia Koloskova, Tao Lin, Sebastian U. Stich, Martin Jaggi
22 Jul 2019

DeepSqueeze: Decentralization Meets Error-Compensated Compression
Hanlin Tang, Xiangru Lian, Delin Qu, Lei Yuan, Ce Zhang, Tong Zhang, Liu
17 Jul 2019

A Highly Efficient Distributed Deep Learning System For Automatic Speech Recognition
Interspeech, 2019
Wei Zhang, Xiaodong Cui, Ulrich Finkler, G. Saon, Abdullah Kayi, A. Buyuktosunoglu, Brian Kingsbury, David S. Kung, M. Picheny
10 Jul 2019

Data Encoding for Byzantine-Resilient Distributed Optimization
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2019
Deepesh Data, Linqi Song, Suhas Diggavi
05 Jul 2019

Distributed Learning in Non-Convex Environments -- Part II: Polynomial Escape from Saddle-Points
IEEE Transactions on Signal Processing (IEEE Trans. Signal Process.), 2019
Stefan Vlaski, Ali H. Sayed
03 Jul 2019

Distributed Learning in Non-Convex Environments -- Part I: Agreement at a Linear Rate
IEEE Transactions on Signal Processing (IEEE Trans. Signal Process.), 2019
Stefan Vlaski, Ali H. Sayed
03 Jul 2019

Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning
IEEE Signal Processing Magazine (IEEE SPM), 2019
Shi Pu, Alexander Olshevsky, I. Paschalidis
28 Jun 2019

Layered SGD: A Decentralized and Synchronous SGD Algorithm for Scalable Deep Neural Network Training
K. Yu, Thomas Flynn, Shinjae Yoo, N. D'Imperio
13 Jun 2019

Gossip-based Actor-Learner Architectures for Deep Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2019
Mahmoud Assran, Joshua Romoff, Nicolas Ballas, Joelle Pineau, Michael G. Rabbat
09 Jun 2019

A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent
Shi Pu, Alexander Olshevsky, I. Paschalidis
06 Jun 2019

Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
IEEE Transactions on Signal Processing (IEEE Trans. Signal Process.), 2019
Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei-neng Chen, Tie-Yan Liu
29 May 2019

Accelerated Sparsified SGD with Error Feedback
Tomoya Murata, Taiji Suzuki
29 May 2019

Bayesian Nonparametric Federated Learning of Neural Networks
International Conference on Machine Learning (ICML), 2019
Mikhail Yurochkin, Mayank Agarwal, S. Ghosh, Kristjan Greenewald, T. Hoang, Y. Khazaeni
28 May 2019

An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums
Neural Information Processing Systems (NeurIPS), 2019
Aymeric Dieuleveut, Francis R. Bach, Laurent Massoulié
27 May 2019

Decentralized Bayesian Learning over Graphs
Anusha Lalitha, Xinghan Wang, O. Kilinc, Y. Lu, T. Javidi, F. Koushanfar
24 May 2019

Decentralized Learning of Generative Adversarial Networks from Non-iid Data
Ryo Yonetani, Tomohiro Takahashi, Atsushi Hashimoto, Yoshitaka Ushiku
23 May 2019

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
International Conference on Intelligent Cloud Computing (ICICC), 2019
Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar
23 May 2019

A Linearly Convergent Proximal Gradient Algorithm for Decentralized Optimization
Neural Information Processing Systems (NeurIPS), 2019
Sulaiman A. Alghunaim, Kun Yuan, Ali H. Sayed
20 May 2019

DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression
International Conference on Machine Learning (ICML), 2019
Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, Ji Liu
15 May 2019

Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints
International Conference on Learning Representations (ICLR), 2019
Mengtian Li, Ersin Yumer, Deva Ramanan
12 May 2019

On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization
International Conference on Machine Learning (ICML), 2019
Hao Yu, Rong Jin, Sen Yang
09 May 2019

Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-Up
Neural Information Processing Systems (NeurIPS), 2019
Dominic Richards, Patrick Rebeschini
08 May 2019

Communication trade-offs for synchronized distributed SGD with large step size
Kumar Kshitij Patel, Hadrien Hendrikx
25 Apr 2019

CleanML: A Study for Evaluating the Impact of Data Cleaning on ML Classification Tasks
Peng Li, Susie Xi Rao, Jennifer Blase, Yue Zhang, Xu Chu, Ce Zhang
20 Apr 2019

Distributed Deep Learning Strategies For Automatic Speech Recognition
Wei Zhang, Xiaodong Cui, Ulrich Finkler, Brian Kingsbury, G. Saon, David S. Kung, M. Picheny
10 Apr 2019

Scalable Deep Learning on Distributed Infrastructures: Challenges, Techniques and Tools
R. Mayer, Hans-Arno Jacobsen
27 Mar 2019

A Provably Communication-Efficient Asynchronous Distributed Inference Method for Convex and Nonconvex Problems
Jineng Ren, Jarvis Haupt
16 Mar 2019

Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications
Songtao Lu, Ioannis C. Tsaknakis, Mingyi Hong, Yongxin Chen
21 Feb 2019

Hop: Heterogeneity-Aware Decentralized Training
Qinyi Luo, Jinkun Lin, Youwei Zhuo, Xuehai Qian
04 Feb 2019

Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
International Conference on Machine Learning (ICML), 2019
Anastasia Koloskova, Sebastian U. Stich, Martin Jaggi
01 Feb 2019

Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums
Aymeric Dieuleveut, Francis R. Bach, Laurent Massoulié
28 Jan 2019

SGD: General Analysis and Improved Rates
Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, Peter Richtárik
27 Jan 2019

DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization
Parvin Nazari, Davoud Ataee Tarzanagh, George Michailidis
25 Jan 2019

Fully Decentralized Joint Learning of Personalized Models and Collaboration Graphs
Valentina Zantedeschi, A. Bellet, Marc Tommasi
24 Jan 2019

Trajectory Normalized Gradients for Distributed Optimization
Jianqiao Wangni, Ke Li, Jianbo Shi, Jitendra Malik
24 Jan 2019

Fully Asynchronous Distributed Optimization with Linear Convergence in Directed Networks
Jiaqi Zhang, Keyou You
24 Jan 2019

Distributed Nesterov gradient methods over arbitrary graphs
Ran Xin, D. Jakovetić, U. Khan
21 Jan 2019

FPDeep: Scalable Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters
Tong Geng, Tianqi Wang, Ang Li, Xi Jin, Martin C. Herbordt
04 Jan 2019

Clustering with Distributed Data
S. Kar, Brian Swenson
01 Jan 2019

Stanza: Layer Separation for Distributed Training in Deep Learning
Xiaorui Wu, Hongao Xu, Bo Li, Y. Xiong
27 Dec 2018

Wireless Network Intelligence at the Edge
Jihong Park, S. Samarakoon, M. Bennis, Mérouane Debbah
07 Dec 2018
Page 12 of 13