ResearchTrend.AI

arXiv:1805.02677
Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds

7 May 2018
Santosh Vempala
John Wilmes
Abstract

We study the complexity of training neural network models with one hidden nonlinear activation layer and an output weighted sum layer. We analyze Gradient Descent applied to learning a bounded target function on $n$ real-valued inputs. We give an agnostic learning guarantee for GD: starting from a randomly initialized network, it converges in mean squared loss to the minimum error (in $2$-norm) of the best approximation of the target function using a polynomial of degree at most $k$. Moreover, for any $k$, the size of the network and number of iterations needed are both bounded by $n^{O(k)}\log(1/\epsilon)$. In particular, this applies to training networks of unbiased sigmoids and ReLUs. We also rigorously explain the empirical finding that gradient descent discovers lower frequency Fourier components before higher frequency components. We complement this result with nearly matching lower bounds in the Statistical Query model. GD fits well in the SQ framework since each training step is determined by an expectation over the input distribution. We show that any SQ algorithm that achieves significant improvement over a constant function with queries of tolerance some inverse polynomial in the input dimensionality $n$ must use $n^{\Omega(k)}$ queries even when the target functions are restricted to a set of $n^{O(k)}$ degree-$k$ polynomials, and the input distribution is uniform over the unit sphere; for this class the information-theoretic lower bound is only $\Theta(k \log n)$. Our approach for both parts is based on spherical harmonics. We view gradient descent as an operator on the space of functions, and study its dynamics. An essential tool is the Funk-Hecke theorem, which explains the eigenfunctions of this operator in the case of the mean squared loss.
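As an illustrative sketch of the training setup the abstract describes — a one-hidden-layer network with a nonlinear activation layer and a weighted-sum output layer, trained by gradient descent on mean squared loss over inputs on the unit sphere — the following minimal demo may help fix ideas. The target function, the hyperparameters, and the choice of updating only the output weights are assumptions made for this example, not the paper's exact procedure or guarantees.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's method): full-batch gradient
# descent on a one-hidden-layer network with ReLU hidden units and a
# weighted-sum output, minimizing mean squared loss.

rng = np.random.default_rng(0)

n, m, N = 5, 200, 1000                # input dim, hidden units, samples
X = rng.standard_normal((N, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # inputs on the unit sphere
y = X[:, 0] * X[:, 1]                 # a bounded degree-2 polynomial target

W = rng.standard_normal((m, n))       # random hidden weights (unbiased units)
a = np.zeros(m)                       # output weights, trained by GD

def relu(z):
    return np.maximum(z, 0.0)

H = relu(X @ W.T)                     # hidden-layer features, shape (N, m)

lr = 1e-3
for step in range(2000):
    pred = H @ a
    grad = H.T @ (pred - y) / N       # gradient of 0.5 * mean squared loss
    a -= lr * grad

mse = np.mean((H @ a - y) ** 2)
print(f"final mean squared loss: {mse:.4f}")
```

With a randomly initialized hidden layer and a low-degree polynomial target, the loss drops well below that of the best constant predictor, in line with the abstract's picture of GD fitting low-degree (low-frequency) components.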
