Phase diagram of early training dynamics in deep neural networks: effect of the learning rate, depth, and width

Abstract

We systematically analyze optimization dynamics in deep neural networks (DNNs) trained with stochastic gradient descent (SGD) and study the effect of the learning rate $\eta$, depth $d$, and width $w$ of the neural network. By analyzing the maximum eigenvalue $\lambda_t^H$ of the Hessian of the loss, which is a measure of sharpness of the loss landscape, we find that the dynamics can show four distinct regimes: (i) an early-time transient regime, (ii) an intermediate saturation regime, (iii) a progressive sharpening regime, and (iv) a late-time "edge of stability" regime. The early and intermediate regimes (i) and (ii) exhibit a rich phase diagram depending on $\eta \equiv c / \lambda_0^H$, $d$, and $w$. We identify several critical values of $c$, which separate qualitatively distinct phenomena in the early-time dynamics of training loss and sharpness. Notably, we discover the opening up of a "sharpness reduction" phase, where sharpness decreases at early times, as $d$ and $1/w$ are increased.
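For concreteness, the sharpness $\lambda_t^H$ can be estimated without forming the full Hessian by running power iteration on Hessian-vector products, and the resulting $\lambda_0^H$ at initialization can be used to set $\eta = c / \lambda_0^H$. Below is a minimal PyTorch sketch of this procedure; it is our own illustration, not the paper's code, and the function name, model, data, and hyperparameters (e.g. `iters`, the choice `c = 2.0`) are all assumptions:

```python
import torch

def top_hessian_eigenvalue(model, loss_fn, x, y, iters=50, tol=1e-4):
    # Estimate the largest Hessian eigenvalue (the sharpness lambda^H)
    # by power iteration on Hessian-vector products.
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)

    # Random unit starting vector, stored as one tensor per parameter.
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((u ** 2).sum() for u in v))
    v = [u / norm for u in v]

    eig = 0.0
    for _ in range(iters):
        # Hessian-vector product: H v = d(grad . v) / d(params).
        gv = sum((g * u).sum() for g, u in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        # Rayleigh quotient v^T H v (v is a unit vector).
        new_eig = sum((h * u).sum() for h, u in zip(hv, v)).item()
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv)) + 1e-12
        v = [h / norm for h in hv]
        if abs(new_eig - eig) < tol * max(abs(eig), 1e-12):
            break
        eig = new_eig
    return eig

# Usage: set the learning rate as eta = c / lambda_0^H at initialization.
model = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1))
x, y = torch.randn(256, 10), torch.randn(256, 1)
lam0 = top_hessian_eigenvalue(model, torch.nn.functional.mse_loss, x, y)
c = 2.0  # illustrative value of c, not one of the paper's critical values
eta = c / lam0
```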
