On the Power of Differentiable Learning versus PAC and SQ Learning

9 August 2021
Emmanuel Abbe
Pritish Kamath
Eran Malach
Colin Sandon
Nathan Srebro
    MLT
Abstract

We study the power of learning via mini-batch stochastic gradient descent (SGD) on the population loss, and batch Gradient Descent (GD) on the empirical loss, of a differentiable model or neural network, and ask what learning problems can be learnt using these paradigms. We show that SGD and GD can always simulate learning with statistical queries (SQ), but their ability to go beyond that depends on the precision $\rho$ of the gradient calculations relative to the minibatch size $b$ (for SGD) and sample size $m$ (for GD). With fine enough precision relative to minibatch size, namely when $b\rho$ is small enough, SGD can go beyond SQ learning and simulate any sample-based learning algorithm and thus its learning power is equivalent to that of PAC learning; this extends prior work that achieved this result for $b=1$. Similarly, with fine enough precision relative to the sample size $m$, GD can also simulate any sample-based learning algorithm based on $m$ samples. In particular, with polynomially many bits of precision (i.e. when $\rho$ is exponentially small), SGD and GD can both simulate PAC learning regardless of the mini-batch size. On the other hand, when $b\rho^2$ is large enough, the power of SGD is equivalent to that of SQ learning.
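The quantities $b$ and $\rho$ in the abstract are easiest to picture as a mini-batch SGD loop whose gradients are only available up to finite precision. The sketch below (assuming NumPy; `minibatch_sgd`, `grad_fn`, and the synthetic least-squares example are illustrative names, not code from the paper) rounds each averaged mini-batch gradient to the nearest multiple of $\rho$ before the update, which is the mechanism whose interplay with the batch size $b$ the paper analyzes.

```python
import numpy as np

def round_to_precision(x, rho):
    """Round each entry of x to the nearest multiple of rho,
    modeling gradient computations carried out with precision rho."""
    return rho * np.round(x / rho)

def minibatch_sgd(grad_fn, w0, data, b, lr, rho, steps=1000, rng=None):
    """Mini-batch SGD in which every stochastic gradient is only
    available up to precision rho (a sketch of the setting studied
    in the paper, where the interplay of b and rho governs SQ-like
    versus PAC-like power).

    grad_fn(w, batch) should return the average gradient of the loss
    over `batch` at parameters `w` (a hypothetical user-supplied oracle).
    """
    rng = rng or np.random.default_rng(0)
    w = np.array(w0, dtype=float)
    n = len(data)
    for _ in range(steps):
        batch = data[rng.choice(n, size=b, replace=False)]  # sample a mini-batch
        g = grad_fn(w, batch)                               # exact average gradient
        w -= lr * round_to_precision(g, rho)                # update with rho-precision gradient
    return w

# Example usage: least-squares on synthetic data (all names illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(512, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=512)
data = np.arange(512)  # indices into (X, y)

def grad_fn(w, batch):
    Xb, yb = X[batch], y[batch]
    return Xb.T @ (Xb @ w - yb) / len(batch)

w_hat = minibatch_sgd(grad_fn, np.zeros(5), data, b=8, lr=0.1, rho=1e-4)
```

With very coarse rounding (large $\rho$ relative to $b$), each update reveals little more than a statistical query about the data; with fine enough rounding (small $b\rho$), the low-order bits of the gradient can encode sample information, which is the intuition behind the paper's separation results. This snippet only illustrates the quantization, not the simulation constructions themselves.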
