An Adaptive Remote Stochastic Gradient Method for Training Neural Networks

4 May 2019
Yushu Chen
Hao Jing
Wenlai Zhao
Zhiqiang Liu
Haohuan Fu
Lián Qiao
Wei Xue
Guangwen Yang
    ODL

Papers citing "An Adaptive Remote Stochastic Gradient Method for Training Neural Networks"

2 / 2 papers shown
Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
Robin M. Schmidt
Frank Schneider
Philipp Hennig
ODL
03 Jul 2020
On Empirical Comparisons of Optimizers for Deep Learning
Dami Choi
Christopher J. Shallue
Zachary Nado
Jaehoon Lee
Chris J. Maddison
George E. Dahl
11 Oct 2019