Why Adam Works Better with $\beta_1 = \beta_2$: The Missing Gradient Scale Invariance Principle

Alberto Fernández-Hernández
Cristian Pérez-Corral
Jose I. Mestre
Manuel F. Dolz
Enrique S. Quintana-Ortí
Main: 8 pages, 8 figures, 2 tables; Bibliography: 1 page; Appendix: 14 pages
Abstract

Adam has been at the core of large-scale training for almost a decade, yet a simple empirical fact remains unaccounted for: both validation scores and the qualitative behavior of training runs improve when the momentum parameters satisfy $\beta_1 = \beta_2$. Some recent studies have reported this pattern, but there is still no explanation for why this choice helps. We show that it is closely tied to a structural property that we refer to as \textit{gradient scale invariance}. We formalize this notion and prove that Adam is gradient scale invariant to first order if and only if $\beta_1 = \beta_2$. This perspective places the balanced regime of Adam in direct alignment with the design principles underlying several recent optimizers that explicitly enforce scale-robust updates. The theory is supported by experiments across vision and language tasks and across different architectural families, in which rescaling the gradient has a markedly smoother effect on the update when $\beta_1 = \beta_2$. Overall, our results offer a coherent explanation for an open question about the behavior of Adam and provide a simple principle to guide the design of future optimizers.
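To make the abstract's first-order claim concrete, here is a back-of-envelope calculation under simplifying assumptions that are ours, not taken from the paper: Adam without bias correction or $\epsilon$, a steady state $m_{t-1} = g$, $v_{t-1} = g^2$ with $g > 0$, and a rescaling factor $c$ applied only to the current gradient (the symbols $\eta$, $g$, $c$ and this setup are illustrative, not the paper's formal definition of gradient scale invariance). Adam's update is

$$ m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \qquad \Delta\theta_t = -\eta\, \frac{m_t}{\sqrt{v_t}} . $$

Setting $g_t = c\,g$ in the assumed steady state, the factor $g$ cancels and the unit update becomes

$$ u(c) = \frac{\beta_1 + (1-\beta_1)\,c}{\sqrt{\beta_2 + (1-\beta_2)\,c^2}}, \qquad u(1) = 1, \qquad u'(1) = (1-\beta_1) - (1-\beta_2) = \beta_2 - \beta_1, $$

so in this toy setting the first-order sensitivity of the update to a rescaled gradient vanishes exactly when $\beta_1 = \beta_2$, consistent with the theorem stated above.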
