
The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory

23 March 2018
Dan Alistarh, Christopher De Sa, Nikola Konstantinov

Papers citing "The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory"

Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence
Karl Bäckström, Ivan Walulya, Marina Papatriantafilou, P. Tsigas
17 Feb 2021
MixML: A Unified Analysis of Weakly Consistent Parallel Learning [FedML]
Yucheng Lu, J. Nash, Christopher De Sa
14 May 2020
Weighted Aggregating Stochastic Gradient Descent for Parallel Deep Learning
Pengzhan Guo, Zeyang Ye, Keli Xiao, Wei Zhu
07 Apr 2020
Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems [MoE]
Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Rui Ding, Mingming Sun, P. Li
12 Mar 2020
Don't Use Large Mini-Batches, Use Local SGD
Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
22 Aug 2018