
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit

Abstract

We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*\left(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle\right)$ under isotropic Gaussian data in $\mathbb{R}^d$, where the link function $\sigma_*:\mathbb{R}\to\mathbb{R}$ is an unknown degree-$q$ polynomial with information exponent $p$ (defined as the lowest degree in its Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with $n \gtrsim d^{\Theta(p)}$ samples, and such statistical complexity is predicted to be necessary by the correlational statistical query lower bound. Surprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ with an arbitrary polynomial link function at a sample and runtime complexity of $n \asymp T \asymp C(q)\cdot d\,\mathrm{polylog}\, d$, where the constant $C(q)$ depends only on the degree of $\sigma_*$, regardless of the information exponent; this dimension dependence matches the information-theoretic limit up to polylogarithmic factors. Core to our analysis is the reuse of minibatches in the gradient computation, which gives rise to higher-order information beyond correlational queries.
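To make the setup concrete, below is a minimal, hypothetical sketch (not the authors' algorithm) of the single-index learning problem with minibatch reuse. All specifics are illustrative assumptions: the link function $z^3 - 3z$ (the degree-3 Hermite polynomial, so information exponent $p = 3$), the ReLU two-layer architecture, and the particular dimensions, learning rate, and reuse schedule are not taken from the paper; the sketch only illustrates how the same minibatch can enter the gradient computation more than once.

```python
import numpy as np

# Illustrative sketch only: a two-layer ReLU network trained on a
# single-index target f_*(x) = sigma_*(<x, theta>) with isotropic
# Gaussian inputs, reusing each minibatch for several gradient steps.

rng = np.random.default_rng(0)

d, m = 64, 128                           # input dimension, network width
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)           # hidden direction, ||theta|| = 1

def sigma_star(z):
    # Example link function with information exponent p = 3:
    # He_3(z) = z^3 - 3z, so its Hermite expansion starts at degree 3.
    return z**3 - 3.0 * z

# Two-layer network: x -> a^T relu(W x + b)
W = rng.standard_normal((m, d)) / np.sqrt(d)
b = 0.1 * rng.standard_normal(m)
a = rng.standard_normal(m) / m

def relu(z):
    return np.maximum(z, 0.0)

lr, batch, reuse_steps = 0.05, 256, 2    # hyperparameters chosen arbitrarily

for t in range(1000):
    # Draw a fresh minibatch of isotropic Gaussian data ...
    X = rng.standard_normal((batch, d))
    y = sigma_star(X @ theta)
    # ... and reuse it for several first-layer gradient steps, so the
    # gradient is no longer a single correlational query on fresh data.
    for _ in range(reuse_steps):
        H = X @ W.T + b                      # (batch, m) preactivations
        err = relu(H) @ a - y                # (batch,) residuals
        grad_H = np.outer(err, a) * (H > 0)  # backprop through ReLU
        grad_W = grad_H.T @ X / batch        # gradient of 0.5 * MSE w.r.t. W
        W -= lr * grad_W

# Check how well first-layer directions align with the hidden index theta.
overlap = np.abs(W @ theta) / np.linalg.norm(W, axis=1)
print(f"max |cos(w_j, theta)| after training: {overlap.max():.3f}")
```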
