Parle: parallelizing stochastic gradient descent

3 July 2017
Pratik Chaudhari, Carlo Baldassi, R. Zecchina, Stefano Soatto, Ameet Talwalkar, Adam M. Oberman
Topics: ODL, FedML

Papers citing "Parle: parallelizing stochastic gradient descent"

6 papers shown

  1. Decentralized Bayesian Learning over Graphs
     Authors: Anusha Lalitha, Xinghan Wang, O. Kilinc, Y. Lu, T. Javidi, F. Koushanfar
     Topics: FedML
     24 May 2019
  2. Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
     Authors: Jianyu Wang, Gauri Joshi
     Topics: FedML
     19 Oct 2018
  3. Cooperative SGD: A unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
     Authors: Jianyu Wang, Gauri Joshi
     22 Aug 2018
  4. Don't Use Large Mini-Batches, Use Local SGD
     Authors: Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
     22 Aug 2018
  5. What does fault tolerant Deep Learning need from MPI?
     Authors: Vinay C. Amatya, Abhinav Vishnu, Charles Siegel, J. Daily
     11 Sep 2017
  6. Deep Relaxation: partial differential equations for optimizing deep neural networks
     Authors: Pratik Chaudhari, Adam M. Oberman, Stanley Osher, Stefano Soatto, G. Carlier
     17 Apr 2017