
On the different regimes of Stochastic Gradient Descent

Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2023
19 September 2023
Antonio Sclocchi
Matthieu Wyart
arXiv:2309.10688
Abstract

Modern deep networks are trained with stochastic gradient descent (SGD), whose key hyperparameters are the number of data considered at each step, or batch size $B$, and the step size, or learning rate $\eta$. For small $B$ and large $\eta$, SGD corresponds to a stochastic evolution of the parameters, whose noise amplitude is governed by the "temperature" $T \equiv \eta/B$. Yet this description is observed to break down for sufficiently large batches $B \geq B^*$, or to simplify to gradient descent (GD) when the temperature is sufficiently small. Understanding where these cross-overs take place remains a central challenge. Here, we resolve these questions for a teacher-student perceptron classification model and show empirically that our key predictions still apply to deep networks. Specifically, we obtain a phase diagram in the $B$-$\eta$ plane that separates three dynamical phases: (i) a noise-dominated SGD governed by temperature, (ii) a large-first-step-dominated SGD, and (iii) GD. These different phases also correspond to different regimes of generalization error. Remarkably, our analysis reveals that the batch size $B^*$ separating regimes (i) and (ii) scales with the size $P$ of the training set, with an exponent that characterizes the hardness of the classification problem.
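
To make the hyperparameters in the abstract concrete, the following minimal Python sketch runs mini-batch SGD on a toy teacher-student perceptron and prints the temperature $T = \eta/B$. All specifics here (the hinge loss, the input dimension, and the parameter values) are illustrative assumptions, not the authors' exact experimental setup.

import numpy as np

rng = np.random.default_rng(0)

d = 100        # input dimension (assumed value for illustration)
P = 1000       # training-set size
B = 10         # batch size
eta = 0.1      # learning rate
T = eta / B    # SGD "temperature" governing the noise amplitude

# Teacher-student setup: labels are the sign of a fixed teacher's output.
teacher = rng.standard_normal(d)
X = rng.standard_normal((P, d)) / np.sqrt(d)
y = np.sign(X @ teacher)

w = np.zeros(d)  # student weights

def hinge_grad(w, Xb, yb):
    """Subgradient of the average hinge loss on a mini-batch."""
    margins = yb * (Xb @ w)
    active = margins < 1.0                      # examples inside the margin
    return -(yb[active, None] * Xb[active]).sum(axis=0) / len(yb)

for step in range(5000):
    idx = rng.choice(P, size=B, replace=False)  # sample a mini-batch
    w -= eta * hinge_grad(w, X[idx], y[idx])    # one SGD step

train_acc = np.mean(np.sign(X @ w) == y)
print(f"T = eta/B = {T:.3g}, train accuracy = {train_acc:.3f}")

Sweeping $B$ and $\eta$ in a loop of this kind is one way to probe the phase diagram the paper describes: at fixed $T$, increasing $B$ past some $B^*$ should change the dynamics from temperature-governed SGD to the other regimes.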
