ResearchTrend.AI
On Empirical Comparisons of Optimizers for Deep Learning

11 October 2019
Dami Choi, Christopher J. Shallue, Zachary Nado, Jaehoon Lee, Chris J. Maddison, George E. Dahl

Papers citing "On Empirical Comparisons of Optimizers for Deep Learning"

Showing 5 of 105 citing papers.
Individual predictions matter: Assessing the effect of data ordering in training fine-tuned CNNs for medical imaging
J. Zech, Jessica Zosa Forde, Michael L. Littman
08 Dec 2019
Optimizer Benchmarking Needs to Account for Hyperparameter Tuning
Prabhu Teja Sivaprasad, Florian Mai, Thijs Vogels, Martin Jaggi, François Fleuret
25 Oct 2019
Demon: Improved Neural Network Training with Momentum Decay
John Chen, Cameron R. Wolfe, Zhaoqi Li, Anastasios Kyrillidis
11 Oct 2019
Quasi-hyperbolic momentum and Adam for deep learning
Jerry Ma, Denis Yarats
16 Oct 2018
A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay
L. Smith
26 Mar 2018