ResearchTrend.AI
arXiv:2106.06530
Label Noise SGD Provably Prefers Flat Global Minimizers

11 June 2021
Alexandru Damian
Tengyu Ma
Jason D. Lee
Abstract

In overparametrized models, the noise in stochastic gradient descent (SGD) implicitly regularizes the optimization trajectory and determines which local minimum SGD converges to. Motivated by empirical studies that demonstrate that training with noisy labels improves generalization, we study the implicit regularization effect of SGD with label noise. We show that SGD with label noise converges to a stationary point of a regularized loss L(θ) + λR(θ), where L(θ) is the training loss, λ is an effective regularization parameter depending on the step size, the strength of the label noise, and the batch size, and R(θ) is an explicit regularizer that penalizes sharp minimizers. Our analysis uncovers an additional regularization effect of large learning rates beyond the linear scaling rule that penalizes large eigenvalues of the Hessian more than small ones. We also prove extensions to classification with general loss functions, SGD with momentum, and SGD with general noise covariance, significantly strengthening the prior work of Blanc et al. to global convergence and large learning rates and of HaoChen et al. to general models.
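To make the mechanism concrete, here is a minimal numpy sketch of label-noise SGD: at each step, fresh Gaussian noise is added to the sampled labels before computing the minibatch gradient. This toy model has a single parameter and so cannot exhibit the flat-minima selection the paper proves (that requires an overparametrized model with a manifold of global minimizers); the step size `eta`, noise strength `sigma`, and batch size `batch` are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy setup: scalar linear regression y = w * x with squared loss.
rng = np.random.default_rng(0)
X = rng.normal(size=200)
w_true = 2.0
y = w_true * X          # noiseless training labels

w = 0.0                 # parameter, initialized away from the minimizer
eta = 0.05              # step size (enters the effective lambda)
sigma = 0.5             # label-noise strength (enters the effective lambda)
batch = 8               # batch size (enters the effective lambda)

for _ in range(2000):
    idx = rng.integers(0, len(X), size=batch)
    xb, yb = X[idx], y[idx]
    yb_noisy = yb + sigma * rng.normal(size=batch)   # inject label noise
    grad = 2.0 * np.mean((w * xb - yb_noisy) * xb)   # d/dw of minibatch MSE
    w -= eta * grad

# The iterates fluctuate in a neighborhood of w_true = 2.0; in an
# overparametrized model the analogous noise-driven drift along the
# zero-loss manifold is what selects flat minimizers.
print(w)
```

Per the abstract, the stationary fluctuation (and hence the effective λ) grows with `eta` and `sigma` and shrinks with `batch`, which can be checked by varying those three constants.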
