Is your batch size the problem? Revisiting the Adam-SGD gap in language modeling

14 June 2025
Teodora Srećković
Jonas Geiping
Antonio Orvieto
Main: 12 pages · Appendix: 5 pages · Bibliography: 3 pages · 20 figures · 1 table
Abstract

Adam is known to perform significantly better than Stochastic Gradient Descent (SGD) in language models, a phenomenon for which a number of explanations have been proposed. In this work, we revisit this "optimizer gap" through a series of comprehensively tuned baseline training runs for language modeling with Transformers. We exhaustively study how momentum, gradient clipping, and batch size affect the gap between SGD and Adam. Our empirical findings show that SGD with momentum, when tuned correctly, can actually perform similarly to Adam in small-batch settings. We revisit existing explanations for Adam's advantage, including heavy-tailed class imbalance, directional sharpness, and Hessian heterogeneity, which struggle to directly explain this phenomenon. Toward bridging this gap in our understanding, we analyze our Transformer training runs and simple quadratic settings inspired by the literature, and we provide new insights, driven by stochastic differential equation models, into the role of batch size in the training dynamics.
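The abstract's mention of stochastic differential equation models refers to the common way the literature links batch size to gradient noise. As a reminder of the standard first-order form (the paper's exact model may differ), SGD with learning rate \(\eta\) and batch size \(B\) is often approximated by

\[
  \mathrm{d}\theta_t = -\nabla L(\theta_t)\,\mathrm{d}t + \sqrt{\tfrac{\eta}{B}}\;\Sigma(\theta_t)^{1/2}\,\mathrm{d}W_t ,
\]

so that shrinking the batch size inflates the diffusion term and changes the training dynamics in which the Adam-SGD comparison is made.

For readers who want to probe the small-batch claim themselves, below is a minimal sketch. It is not the authors' code: the toy model, random data, and hyperparameter values are illustrative assumptions, and the paper's result depends on careful tuning. The sketch compares SGD with momentum against Adam at a small batch size, with gradient clipping, on a toy token-prediction objective.

# Minimal sketch, not the authors' code: the toy model, data, and
# hyperparameter values are illustrative placeholders that need tuning.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, seq_len, d_model = 256, 32, 64
# Random "corpus" of token ids standing in for real language-model data.
data = torch.randint(0, vocab_size, (512, seq_len + 1))

def make_model():
    # Tiny Transformer encoder used as a stand-in language model
    # (no causal mask, so this is only a toy sequence-prediction objective).
    layer = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=128,
                                       batch_first=True)
    return nn.Sequential(
        nn.Embedding(vocab_size, d_model),
        nn.TransformerEncoder(layer, num_layers=2),
        nn.Linear(d_model, vocab_size),
    )

def train(optimizer_name, batch_size=8, steps=200, clip=1.0):
    model = make_model()
    if optimizer_name == "sgd":
        # Momentum is one of the knobs the paper finds matters most for SGD.
        opt = torch.optim.SGD(model.parameters(), lr=0.03, momentum=0.95)
    else:
        opt = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.95))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        idx = torch.randint(0, data.size(0), (batch_size,))
        x, y = data[idx, :-1], data[idx, 1:]
        logits = model(x)
        loss = loss_fn(logits.reshape(-1, vocab_size), y.reshape(-1))
        opt.zero_grad()
        loss.backward()
        # Gradient clipping, another of the three factors studied in the paper.
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        opt.step()
    return loss.item()

print("SGD + momentum:", train("sgd"))
print("Adam:          ", train("adam"))

Per the abstract, any near-parity observed here is specific to the small-batch regime; the relevant knobs are the batch size, the momentum coefficient, and the clipping threshold.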

@article{srećković2025_2506.12543,
  title={Is your batch size the problem? Revisiting the Adam-SGD gap in language modeling},
  author={Teodora Srećković and Jonas Geiping and Antonio Orvieto},
  journal={arXiv preprint arXiv:2506.12543},
  year={2025}
}