
When and Why Momentum Accelerates SGD: An Empirical Study

Abstract

Momentum has become a crucial component in deep learning optimizers, necessitating a comprehensive understanding of when and why it accelerates stochastic gradient descent (SGD). To address the question of ``when'', we establish a meaningful comparison framework that examines the performance of SGD with Momentum (SGDM) under the \emph{effective learning rate} $\eta_{ef}$, a notion unifying the influence of the momentum coefficient $\mu$ and the batch size $b$ over the learning rate $\eta$. Comparing SGDM and SGD with the same effective learning rate and the same batch size, we observe a consistent pattern: when $\eta_{ef}$ is small, SGDM and SGD experience almost the same empirical training losses; when $\eta_{ef}$ surpasses a certain threshold, SGDM begins to perform better than SGD. Furthermore, the advantage of SGDM over SGD becomes more pronounced with a larger batch size. For the question of ``why'', we find that the momentum acceleration is closely related to \emph{abrupt sharpening}, a sudden jump of the directional Hessian along the update direction. Specifically, the misalignment between SGD and SGDM occurs at the same moment that SGD experiences abrupt sharpening and converges more slowly. Momentum improves the performance of SGDM by preventing or deferring the occurrence of abrupt sharpening. Together, this study unveils the interplay between momentum, learning rates, and batch sizes, thus improving our understanding of momentum acceleration.
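To make the comparison framework concrete, below is a minimal sketch of matching SGD and SGDM at the same effective learning rate and batch size. The specific definition $\eta_{ef} = \eta / (b(1-\mu))$, as well as the toy model, data, and hyperparameters, are assumptions made for illustration; they are not taken from the paper.

```python
# Minimal sketch (not the authors' code): comparing SGD and SGDM at a matched
# effective learning rate. The definition eta_ef = eta / (b * (1 - mu)) is an
# assumption for illustration; the model, data, and hyperparameters are toy.
import torch
import torch.nn.functional as F

def make_sgd(params, eta_ef, batch_size, mu):
    # Recover the raw learning rate eta from the assumed effective learning rate.
    lr = eta_ef * batch_size * (1.0 - mu)
    return torch.optim.SGD(params, lr=lr, momentum=mu)

def train(use_momentum, eta_ef=1e-3, b=64, steps=300, mu=0.9):
    torch.manual_seed(0)  # identical initialization and batch order across the two runs
    X, y = torch.randn(1024, 20), torch.randn(1024, 1)
    model = torch.nn.Sequential(
        torch.nn.Linear(20, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
    opt = make_sgd(model.parameters(), eta_ef, b, mu if use_momentum else 0.0)
    for _ in range(steps):
        idx = torch.randint(0, 1024, (b,))            # mini-batch of size b
        loss = F.mse_loss(model(X[idx]), y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

print("SGD :", train(use_momentum=False))
print("SGDM:", train(use_momentum=True))
```

To probe the ``why'' side empirically, the directional Hessian along an update direction $d$, i.e. $d^\top H d / \|d\|^2$, can be tracked during training with a Hessian-vector product to observe whether and when abrupt sharpening occurs.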
