Adam Simplified: Bias Correction Debunked

Main: 4 pages
Figures: 8
Bibliography: 2 pages
Appendix: 5 pages
Abstract

The Adam optimizer is a cornerstone of modern deep learning, yet the empirical necessity of each of its individual components is often taken for granted. This paper presents a focused investigation into the role of bias correction, a feature whose contribution remains poorly understood. Through a series of systematic ablations on vision and language-modelling tasks, we demonstrate that the conventional wisdom surrounding bias correction is misleading. In particular, under the optimal hyper-parameter configuration, the inclusion of bias correction yields no improvement in final test performance. Moreover, unless appropriate learning rate scheduling is implemented, the inclusion of bias correction can sometimes be detrimental to performance. We further reinterpret bias correction as a form of implicit learning rate scheduling whose behaviour depends strongly on the choice of smoothing hyper-parameters $\beta_1, \beta_2 \in [0,1)$. Our findings challenge the universal inclusion of this component.
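
For context, the standard Adam update with bias correction (as in Kingma & Ba, 2015) is reproduced below; this is the textbook formulation, not necessarily the exact notation or derivation used in the paper. Folding the correction factors into the step size illustrates the "implicit learning rate schedule" reading of bias correction referred to in the abstract.

\begin{align}
  m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t,
  & v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \\
  \hat{m}_t &= \frac{m_t}{1-\beta_1^{\,t}},
  & \hat{v}_t &= \frac{v_t}{1-\beta_2^{\,t}}, \\
  \theta_t &= \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.
\end{align}

Ignoring $\epsilon$, the corrected update is identical to the uncorrected one run with a time-dependent learning rate,

\begin{equation}
  \alpha_t \;=\; \alpha \cdot \frac{\sqrt{1-\beta_2^{\,t}}}{1-\beta_1^{\,t}},
  \qquad
  \theta_t \;=\; \theta_{t-1} - \alpha_t\, \frac{m_t}{\sqrt{v_t} + \epsilon},
\end{equation}

so bias correction acts as a warm-up-like schedule whose shape is governed entirely by $\beta_1$ and $\beta_2$.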
