On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions

6 February 2024
Yusu Hong
Junhong Lin
Abstract

The Adaptive Momentum Estimation (Adam) algorithm is highly effective for training a wide range of deep learning models. Despite this, there is limited theoretical understanding of Adam, especially of its vanilla form in non-convex smooth settings with potentially unbounded gradients and affine variance noise. In this paper, we study vanilla Adam under these challenging conditions. We introduce a comprehensive noise model that covers affine variance noise, bounded noise, and sub-Gaussian noise. We show that, with high probability, Adam finds a stationary point at a rate of $\mathcal{O}(\text{poly}(\log T)/\sqrt{T})$ under this general noise model, where $T$ denotes the total number of iterations, matching the lower bound for stochastic first-order algorithms up to logarithmic factors. More importantly, we show that Adam requires no step-size tuning based on problem parameters, giving it a better adaptation property than Stochastic Gradient Descent under the same conditions. We also provide a probabilistic convergence result for Adam under a generalized smoothness condition, which allows unbounded smoothness parameters and has been shown empirically to capture the smoothness of many practical objective functions more accurately.
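For context, vanilla Adam maintains exponential moving averages of the stochastic gradient and of its element-wise square, and scales the update by the square root of the latter. The sketch below is a minimal NumPy illustration of this update rule, not the exact setup analyzed in the paper; the hyperparameter values (lr, beta1, beta2, eps) and the noisy quadratic objective in the usage example are illustrative assumptions.

import numpy as np

def adam_step(x, m, v, t, grad, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One vanilla Adam update; returns the new iterate and moment estimates."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)   # coordinate-wise adaptive step
    return x, m, v

# Illustrative usage on a noisy quadratic (hypothetical example, not from the paper)
rng = np.random.default_rng(0)
x = np.ones(10)
m = np.zeros_like(x)
v = np.zeros_like(x)
for t in range(1, 1001):
    grad = x + 0.1 * rng.standard_normal(x.shape)  # stochastic gradient of 0.5*||x||^2 plus noise
    x, m, v = adam_step(x, m, v, t, grad)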

@article{hong2025_2402.03982,
  title={On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions},
  author={Yusu Hong and Junhong Lin},
  journal={arXiv preprint arXiv:2402.03982},
  year={2025}
}