Benefits of Learning Rate Annealing for Tuning-Robustness in Stochastic Optimization

13 March 2025
Amit Attia
Tomer Koren
Abstract

The learning rate in stochastic gradient methods is a critical hyperparameter that is notoriously costly to tune via standard grid search, especially for training modern large-scale models with billions of parameters. We identify a theoretical advantage of learning rate annealing schemes that decay the learning rate to zero at a polynomial rate, such as the widely used cosine schedule, by demonstrating their increased robustness to initial parameter misspecification due to a coarse grid search. We present an analysis in a stochastic convex optimization setup demonstrating that the convergence rate of stochastic gradient descent with annealed schedules depends sublinearly on the multiplicative misspecification factor $\rho$ (i.e., the grid resolution), achieving a rate of $O(\rho^{1/(2p+1)}/\sqrt{T})$, where $p$ is the degree of polynomial decay and $T$ is the number of steps, in contrast to the $O(\rho/\sqrt{T})$ rate that arises with fixed stepsizes and exhibits a linear dependence on $\rho$. Experiments confirm the increased robustness compared to tuning with a fixed stepsize, which has significant implications for the computational overhead of hyperparameter search in practical training scenarios.
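
To make the rate comparison concrete: with a misspecification factor of $\rho = 100$ and linear decay ($p = 1$), the annealed bound degrades only by $\rho^{1/3} = 100^{1/3} \approx 4.6$, whereas the fixed-stepsize bound degrades by the full factor of 100. The sketch below (not the authors' code or experimental setup) illustrates this robustness on a toy stochastic convex problem; the objective, noise level, horizon, and stepsize constants are assumptions chosen purely for illustration.

# Minimal sketch, assuming f(x) = 0.5 * ||x||^2 with additive Gaussian gradient noise:
# SGD with a fixed stepsize vs. a polynomially annealed one, eta_t = eta_0 * (1 - t/T)^p,
# when the base stepsize is misspecified by a factor rho (as from a coarse grid search).
import numpy as np

def run_sgd(T, stepsize, dim=10, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.ones(dim)                              # arbitrary initialization
    for t in range(T):
        g = x + noise * rng.standard_normal(dim)  # unbiased stochastic gradient of f at x
        x = x - stepsize(t, T) * g
    return 0.5 * float(x @ x)                     # final-iterate suboptimality f(x_T)

T = 10_000
eta_tuned = 1.0 / np.sqrt(T)   # a reasonable base stepsize for this toy problem
rho = 100.0                    # multiplicative misspecification factor
p = 1                          # degree of polynomial decay

fixed    = lambda t, T: rho * eta_tuned                      # misspecified, held constant
annealed = lambda t, T: rho * eta_tuned * (1 - t / T) ** p   # same base, annealed to zero

print("well-tuned fixed stepsize:     ", run_sgd(T, lambda t, T: eta_tuned))
print("misspecified fixed stepsize:   ", run_sgd(T, fixed))
print("misspecified annealed stepsize:", run_sgd(T, annealed))

In this toy setting, the annealed schedule with a 100x misspecified base stepsize ends up close to the well-tuned fixed stepsize, while the misspecified fixed stepsize is substantially worse, mirroring the sublinear versus linear dependence on $\rho$ described above.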

@article{attia2025_2503.09411,
  title={Benefits of Learning Rate Annealing for Tuning-Robustness in Stochastic Optimization},
  author={Amit Attia and Tomer Koren},
  journal={arXiv preprint arXiv:2503.09411},
  year={2025}
}