Parle: parallelizing stochastic gradient descent (arXiv:1707.00424)
3 July 2017
Pratik Chaudhari, Carlo Baldassi, R. Zecchina, Stefano Soatto, Ameet Talwalkar, Adam M. Oberman
Communities: ODL, FedML
Papers citing "Parle: parallelizing stochastic gradient descent" (6 of 6 papers shown):

Decentralized Bayesian Learning over Graphs
Anusha Lalitha, Xinghan Wang, O. Kilinc, Y. Lu, T. Javidi, F. Koushanfar
FedML
24 May 2019

Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
Jianyu Wang, Gauri Joshi
FedML
19 Oct 2018

Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
Jianyu Wang, Gauri Joshi
22 Aug 2018

Don't Use Large Mini-Batches, Use Local SGD
Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
22 Aug 2018

What does fault tolerant Deep Learning need from MPI?
Vinay C. Amatya, Abhinav Vishnu, Charles Siegel, J. Daily
11 Sep 2017

Deep Relaxation: partial differential equations for optimizing deep neural networks
Pratik Chaudhari, Adam M. Oberman, Stanley Osher, Stefano Soatto, G. Carlier
17 Apr 2017