Towards Understanding Acceleration Tradeoff between Momentum and Asynchrony in Nonconvex Stochastic Optimization

4 June 2018
Tianyi Liu, Shiyang Li, Jianping Shi, Enlu Zhou, T. Zhao

Papers citing "Towards Understanding Acceleration Tradeoff between Momentum and Asynchrony in Nonconvex Stochastic Optimization" (4 papers)

- Accelerate Distributed Stochastic Descent for Nonconvex Optimization with Momentum
  Guojing Cong, Tianyi Liu
  01 Oct 2021
- Async-RED: A Provably Convergent Asynchronous Block Parallel Stochastic Method using Deep Denoising Priors
  Yu Sun, Jiaming Liu, Yiran Sun, B. Wohlberg, Ulugbek S. Kamilov
  03 Oct 2020
- MixML: A Unified Analysis of Weakly Consistent Parallel Learning
  Yucheng Lu, J. Nash, Christopher De Sa
  14 May 2020
- At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks?
  Niv Giladi, Mor Shpigel Nacson, Elad Hoffer, Daniel Soudry
  26 Sep 2019