ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1506.08272

Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization

27 June 2015
Xiangru Lian
Yijun Huang
Y. Li
Ji Liu

Papers citing "Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization"

50 / 107 papers shown
Efficient Learning of Generative Models via Finite-Difference Score Matching
Tianyu Pang
Kun Xu
Chongxuan Li
Yang Song
Stefano Ermon
Jun Zhu
DiffM
31
53
0
07 Jul 2020
MixML: A Unified Analysis of Weakly Consistent Parallel Learning
Yucheng Lu
J. Nash
Christopher De Sa
FedML
34
12
0
14 May 2020
Pipelined Backpropagation at Scale: Training Large Models without Batches
Atli Kosson
Vitaliy Chiley
Abhinav Venigalla
Joel Hestness
Urs Koster
35
33
0
25 Mar 2020
Asynchronous and Parallel Distributed Pose Graph Optimization
Yulun Tian
Alec Koppel
Amrit Singh Bedi
Jonathan P. How
49
37
0
06 Mar 2020
Faster On-Device Training Using New Federated Momentum Algorithm
Zhouyuan Huo
Qian Yang
Bin Gu
Heng-Chiao Huang
FedML
22
47
0
06 Feb 2020
Intermittent Pulling with Local Compensation for Communication-Efficient Federated Learning
Yining Qi
Zhihao Qu
Song Guo
Xin Gao
Ruixuan Li
Baoliu Ye
FedML
18
8
0
22 Jan 2020
Asynchronous Federated Learning with Differential Privacy for Edge Intelligence
Yanan Li
Shusen Yang
Xuebin Ren
Cong Zhao
FedML
19
33
0
17 Dec 2019
SAFA: a Semi-Asynchronous Protocol for Fast Federated Learning with Low Overhead
A. Masullo
Ligang He
Toby Perrett
Rui Mao
Carsten Maple
Majid Mirmehdi
25
301
0
03 Oct 2019
The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication
Sebastian U. Stich
Sai Praneeth Karimireddy
FedML
25
20
0
11 Sep 2019
Distributed Inexact Successive Convex Approximation ADMM: Analysis-Part I
Sandeep Kumar
K. Rajawat
Daniel P. Palomar
32
4
0
21 Jul 2019
Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent
Shuheng Shen
Linli Xu
Jingchang Liu
Xianfeng Liang
Yifei Cheng
ODL
FedML
29
24
0
28 Jun 2019
Fully Decoupled Neural Network Learning Using Delayed Gradients
Huiping Zhuang
Yi Wang
Qinglai Liu
Shuai Zhang
Zhiping Lin
FedML
25
30
0
21 Jun 2019
Layered SGD: A Decentralized and Synchronous SGD Algorithm for Scalable Deep Neural Network Training
K. Yu
Thomas Flynn
Shinjae Yoo
N. D'Imperio
OffRL
24
6
0
13 Jun 2019
Bayesian Nonparametric Federated Learning of Neural Networks
Mikhail Yurochkin
Mayank Agarwal
S. Ghosh
Kristjan Greenewald
T. Hoang
Y. Khazaeni
FedML
40
720
0
28 May 2019
Bandwidth Reduction using Importance Weighted Pruning on Ring AllReduce
Zehua Cheng
Zhenghua Xu
24
8
0
06 Jan 2019
Stochastic Distributed Optimization for Machine Learning from Decentralized Features
Yaochen Hu
Di Niu
Jianming Yang
Shengping Zhou
11
5
0
16 Dec 2018
Asynchronous Stochastic Composition Optimization with Variance Reduction
Shuheng Shen
Linli Xu
Jingchang Liu
Junliang Guo
Qing Ling
27
2
0
15 Nov 2018
MD-GAN: Multi-Discriminator Generative Adversarial Networks for Distributed Datasets
Corentin Hardy
Erwan Le Merrer
B. Sericola
GAN
27
181
0
09 Nov 2018
Toward Understanding the Impact of Staleness in Distributed Machine Learning
Wei-Ming Dai
Yi Zhou
Nanqing Dong
Huan Zhang
Eric Xing
25
80
0
08 Oct 2018
Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
Jianyu Wang
Gauri Joshi
33
348
0
22 Aug 2018
Bayesian Pose Graph Optimization via Bingham Distributions and Tempered Geodesic MCMC
Tolga Birdal
Umut Simsekli
M. Eken
Slobodan Ilic
29
38
0
31 May 2018
Double Quantization for Communication-Efficient Distributed Optimization
Yue Yu
Jiaxiang Wu
Longbo Huang
MQ
19
57
0
25 May 2018
LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Tianyi Chen
G. Giannakis
Tao Sun
W. Yin
34
297
0
25 May 2018
Local SGD Converges Fast and Communicates Little
Sebastian U. Stich
FedML
88
1,047
0
24 May 2018
Taming Convergence for Asynchronous Stochastic Gradient Descent with Unbounded Delay in Non-Convex Learning
Xin Zhang
Jia-Wei Liu
Zhengyuan Zhu
24
17
0
24 May 2018
Tell Me Something New: A New Framework for Asynchronous Parallel Learning
Julaiti Alafate
Y. Freund
FedML
16
2
0
19 May 2018
Parallel and Distributed Successive Convex Approximation Methods for Big-Data Optimization
G. Scutari
Ying Sun
40
61
0
17 May 2018
Differential Equations for Modeling Asynchronous Algorithms
Li He
Qi Meng
Wei-neng Chen
Zhiming Ma
Tie-Yan Liu
27
9
0
08 May 2018
Adaptive Federated Learning in Resource Constrained Edge Computing Systems
Shiqiang Wang
Tiffany Tuor
Theodoros Salonidis
K. Leung
C. Makaya
T. He
Kevin S. Chan
144
1,688
0
14 Apr 2018
Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
Sanghamitra Dutta
Gauri Joshi
Soumyadip Ghosh
Parijat Dube
P. Nagpurkar
31
194
0
03 Mar 2018
Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun
Torsten Hoefler
GNN
33
704
0
26 Feb 2018
Asynchronous Stochastic Proximal Methods for Nonconvex Nonsmooth Optimization
Rui Zhu
Di Niu
Zongpeng Li
16
4
0
24 Feb 2018
SparCML: High-Performance Sparse Communication for Machine Learning
Cédric Renggli
Saleh Ashkboos
Mehdi Aghagolzadeh
Dan Alistarh
Torsten Hoefler
29
126
0
22 Feb 2018
Improved asynchronous parallel optimization analysis for stochastic incremental methods
Rémi Leblond
Fabian Pedregosa
Simon Lacoste-Julien
24
70
0
11 Jan 2018
AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training
Chia-Yu Chen
Jungwook Choi
D. Brand
A. Agrawal
Wei Zhang
K. Gopalakrishnan
ODL
18
173
0
07 Dec 2017
Efficient Training of Convolutional Neural Nets on Large Distributed Systems
Sameer Kumar
D. Sreedhar
Vaibhav Saxena
Yogish Sabharwal
Ashish Verma
35
4
0
02 Nov 2017
Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni
Jialei Wang
Ji Liu
Tong Zhang
15
522
0
26 Oct 2017
Convergence Analysis of Distributed Stochastic Gradient Descent with Shuffling
Qi Meng
Wei-neng Chen
Yue Wang
Zhi-Ming Ma
Tie-Yan Liu
FedML
18
101
0
29 Sep 2017
What does fault tolerant Deep Learning need from MPI?
Vinay C. Amatya
Abhinav Vishnu
Charles Siegel
J. Daily
30
19
0
11 Sep 2017
On the convergence properties of a $K$-step averaging stochastic gradient descent algorithm for nonconvex optimization
Fan Zhou
Guojing Cong
46
233
0
03 Aug 2017
Byzantine-Tolerant Machine Learning
Peva Blanchard
El-Mahdi El-Mhamdi
R. Guerraoui
J. Stainer
OOD
FedML
38
70
0
08 Mar 2017
A Generic Online Parallel Learning Framework for Large Margin Models
Shuming Ma
Xu Sun
FedML
18
2
0
02 Mar 2017
Asynchronous Stochastic Block Coordinate Descent with Variance Reduction
Bin Gu
Zhouyuan Huo
Heng-Chiao Huang
31
10
0
29 Oct 2016
Stochastic Gradient MCMC with Stale Gradients
Changyou Chen
Nan Ding
Chunyuan Li
Yizhe Zhang
Lawrence Carin
BDL
46
23
0
21 Oct 2016
Asynchronous Stochastic Gradient Descent with Delay Compensation
Shuxin Zheng
Qi Meng
Taifeng Wang
Wei Chen
Nenghai Yu
Zhiming Ma
Tie-Yan Liu
32
312
0
27 Sep 2016
Asynchronous Parallel Algorithms for Nonconvex Optimization
Loris Cannelli
F. Facchinei
Vyacheslav Kungurtsev
G. Scutari
30
16
0
17 Jul 2016
AdaNet: Adaptive Structural Learning of Artificial Neural Networks
Corinna Cortes
X. Gonzalvo
Vitaly Kuznetsov
M. Mohri
Scott Yang
31
283
0
05 Jul 2016
Parallel SGD: When does averaging help?
Jian Zhang
Christopher De Sa
Ioannis Mitliagkas
Christopher Ré
MoMe
FedML
54
109
0
23 Jun 2016
ASAGA: Asynchronous Parallel SAGA
Rémi Leblond
Fabian Pedregosa
Simon Lacoste-Julien
AI4TS
31
101
0
15 Jun 2016
Omnivore: An Optimizer for Multi-device Deep Learning on CPUs and GPUs
Stefan Hadjis
Ce Zhang
Ioannis Mitliagkas
Dan Iter
Christopher Ré
20
65
0
14 Jun 2016