Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM

Qianqian Tong, Guannan Liang, J. Bi
2 August 2019 · arXiv:1908.00700

Papers citing "Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM"

2 / 2 papers shown
On the Algorithmic Stability and Generalization of Adaptive Optimization Methods
Han Nguyen, Hai Pham, Sashank J. Reddi, Barnabás Póczos
08 Nov 2022
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016