ResearchTrend.AI
L4: Practical loss-based stepsize adaptation for deep learning

Michal Rolínek, Georg Martius
14 February 2018 · arXiv:1802.05074 · ODL

Papers citing "L4: Practical loss-based stepsize adaptation for deep learning"

5 / 5 citing papers shown:

An Adaptive Stochastic Gradient Method with Non-negative Gauss-Newton Stepsizes
Antonio Orvieto, Lin Xiao
05 Jul 2024

Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance
Dimitris Oikonomou, Nicolas Loizou
06 Jun 2024

Amortized Proximal Optimization (ODL)
Juhan Bae, Paul Vicol, Jeff Z. HaoChen, Roger C. Grosse
28 Feb 2022

KOALA: A Kalman Optimization Algorithm with Loss Adaptivity (ODL)
A. Davtyan, Sepehr Sameni, L. Cerkezi, Givi Meishvili, Adam Bielski, Paolo Favaro
07 Jul 2021

LOSSGRAD: automatic learning rate in gradient descent (ODL)
B. Wójcik, Lukasz Maziarka, Jacek Tabor
20 Feb 2019