arXiv:1902.07656
LOSSGRAD: automatic learning rate in gradient descent

20 February 2019
B. Wójcik
Lukasz Maziarka
Jacek Tabor
Abstract

In this paper, we propose a simple, fast, and easy-to-implement algorithm LOSSGRAD (locally optimal step-size in gradient descent), which automatically modifies the step-size in gradient descent during neural network training. Given a function $f$, a point $x$, and the gradient $\nabla_x f$ of $f$ at $x$, we aim to find the step-size $h$ which is locally optimal, i.e. satisfies $h = \arg\min_{t \geq 0} f(x - t \nabla_x f)$. Making use of a quadratic approximation, we show that the algorithm satisfies the above condition. We experimentally show that our method is insensitive to the choice of the initial learning rate while achieving results comparable to other methods.
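
The sketch below illustrates one way a locally optimal step size can be estimated with a quadratic approximation along the negative-gradient direction, as the abstract describes. The function `quadratic_step_size`, the single probe evaluation, and the fallback growth factor are illustrative assumptions for this sketch, not the authors' exact LOSSGRAD update rule.

```python
import numpy as np

def quadratic_step_size(f, x, g, h, growth=2.0):
    """Estimate a locally optimal step size along -g (illustrative sketch).

    Fits q(t) = a*t^2 + b*t + c to the loss restricted to the ray x - t*g,
    using f(x), the directional derivative -||g||^2 at t = 0, and one probe
    evaluation f(x - h*g), then returns the minimizer of q as the next step size.
    """
    c = f(x)                      # q(0) = f(x)
    b = -float(np.dot(g, g))      # slope of f(x - t*g) at t = 0
    probe = f(x - h * g)          # one extra loss evaluation at the current step size
    a = (probe - c - b * h) / (h ** 2)
    if a > 0:                     # convex fit: take its minimizer
        return max(-b / (2.0 * a), 1e-12)
    return growth * h             # non-convex along the ray: grow the step size

# Toy usage on a quadratic loss f(x) = 0.5 * x^T A x, where the fit is exact.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
x = np.array([1.0, 1.0])
h = 1e-3                          # deliberately poor initial learning rate
for _ in range(20):
    g = A @ x                     # gradient of the toy loss
    h = quadratic_step_size(f, x, g, h)
    x = x - h * g
print(x, h)
```

On this toy quadratic loss the one-dimensional fit is exact, so the returned step size coincides with an exact line search and the iterates converge regardless of the poor initial value of `h`, mirroring the insensitivity to the initial learning rate reported in the abstract.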

View on arXiv