

A Bayesian Perspective on Generalization and Stochastic Gradient Descent

17 October 2017
Samuel L. Smith
Quoc V. Le
BDL
arXiv:1710.06451
Abstract

This paper tackles two related questions at the heart of machine learning: how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work is inspired by Zhang et al. (2017), who showed deep networks can easily memorize randomly labeled training data, despite generalizing well when shown real labels of the same inputs. We show here that the same phenomenon occurs in small linear models. These observations are explained by evaluating the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also explore the "generalization gap" between small and large batch training, identifying an optimum batch size which maximizes the test set accuracy. Interpreting stochastic gradient descent as a stochastic differential equation, we identify a "noise scale" $g = \epsilon(\frac{N}{B} - 1) \approx \epsilon N/B$, where $\epsilon$ is the learning rate, $N$ the training set size and $B$ the batch size. Consequently the optimum batch size is proportional to the learning rate and the training set size, $B_{opt} \propto \epsilon N$. We verify these predictions empirically.
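As a quick numerical check of the noise-scale relation above, the short Python sketch below computes $g = \epsilon(\frac{N}{B} - 1)$ and inverts it to show how the batch size that preserves a given noise scale grows roughly linearly with the learning rate. The learning rate, dataset size, and baseline batch size are illustrative assumptions, not values reported in the paper.

# A minimal numeric sketch of the scaling rule stated in the abstract.
# The learning rate, dataset size, and baseline batch size below are
# illustrative assumptions, not values reported in the paper.

def noise_scale(eps: float, N: int, B: int) -> float:
    """SGD noise scale g = eps * (N/B - 1), approximately eps * N / B when B << N."""
    return eps * (N / B - 1)

N = 50_000      # training set size (assumption)
eps = 0.1       # learning rate (assumption)
B_opt = 64      # batch size assumed optimal at this learning rate (assumption)
g = noise_scale(eps, N, B_opt)

# Holding g fixed while doubling the learning rate suggests roughly doubling
# the batch size, consistent with B_opt proportional to eps * N.
eps2 = 2 * eps
B2 = round(eps2 * N / (g + eps2))   # invert g = eps * (N/B - 1) for B
print(f"noise scale g at (eps={eps}, B={B_opt}): {g:.2f}")
print(f"batch size preserving g at eps={eps2}: B ≈ {B2}")

Keeping the noise scale fixed while scaling the learning rate is one way to read the $B_{opt} \propto \epsilon N$ proportionality claimed in the abstract.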
