
On the different regimes of Stochastic Gradient Descent

Abstract

Modern deep networks are trained with stochastic gradient descent (SGD), whose key hyperparameters are the number of data considered at each step, or batch size B, and the step size, or learning rate η. For small B and large η, SGD corresponds to a stochastic evolution of the parameters, whose noise amplitude is governed by the "temperature" T ≡ η/B. Yet this description is observed to break down for sufficiently large batches B ≥ B*, or simplifies to gradient descent (GD) when the temperature is sufficiently small. Understanding where these cross-overs take place remains a central challenge. Here, we resolve these questions for a teacher-student perceptron classification model and show empirically that our key predictions still apply to deep networks. Specifically, we obtain a phase diagram in the B-η plane that separates three dynamical phases: (i) a noise-dominated SGD governed by temperature, (ii) a large-first-step-dominated SGD, and (iii) GD. These different phases also correspond to different regimes of generalization error. Remarkably, our analysis reveals that the batch size B* separating regimes (i) and (ii) scales with the size P of the training set, with an exponent that characterizes the hardness of the classification problem.
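The following is a minimal sketch, not the authors' code, of the setting the abstract describes: a teacher-student perceptron trained with mini-batch SGD, where the batch size B and learning rate η can be varied at fixed temperature T ≡ η/B. The Gaussian inputs, random teacher, logistic loss, and all dimensions and step counts are assumptions chosen for illustration.

```python
# Hypothetical sketch of SGD on a teacher-student perceptron (illustrative only).
# Assumptions: Gaussian inputs, labels from a random teacher, logistic loss.
import numpy as np

rng = np.random.default_rng(0)

d, P = 100, 10_000                 # input dimension, training-set size (assumed values)
B, eta, steps = 32, 0.1, 5_000     # batch size, learning rate, number of SGD steps
T = eta / B                        # "temperature" governing the SGD noise amplitude

teacher = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((P, d))
y = np.sign(X @ teacher)           # teacher labels

w = np.zeros(d)                    # student weights
for _ in range(steps):
    idx = rng.integers(0, P, size=B)               # sample a mini-batch of size B
    margin = y[idx] * (X[idx] @ w)
    # gradient of the mean logistic loss log(1 + exp(-y w.x)) over the mini-batch
    grad = -(X[idx].T @ (y[idx] / (1.0 + np.exp(margin)))) / B
    w -= eta * grad                                 # SGD update with learning rate eta

# Generalization error estimated on fresh data from the same teacher
X_test = rng.standard_normal((10_000, d))
err = np.mean(np.sign(X_test @ w) != np.sign(X_test @ teacher))
print(f"T = eta/B = {T:.3g}, test error = {err:.3f}")
```

Sweeping B and η in such a setup (e.g., at fixed T, or at fixed η with growing B) is one way to probe the noise-dominated, first-step-dominated, and GD-like regimes discussed in the paper; the specific sweep is not prescribed by the abstract.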
