Neural Networks Efficiently Learn Low-Dimensional Representations with SGD

29 September 2022
Alireza Mousavi-Hosseini
Sejun Park
M. Girotti
Ioannis Mitliagkas
Murat A. Erdogdu
arXiv:2209.14863
Abstract

We study the problem of training a two-layer neural network (NN) of arbitrary width using stochastic gradient descent (SGD) where the input $\boldsymbol{x}\in\mathbb{R}^d$ is Gaussian and the target $y\in\mathbb{R}$ follows a multiple-index model, i.e., $y=g(\langle\boldsymbol{u}_1,\boldsymbol{x}\rangle,\dots,\langle\boldsymbol{u}_k,\boldsymbol{x}\rangle)$ with a noisy link function $g$. We prove that the first-layer weights of the NN converge to the $k$-dimensional principal subspace spanned by the vectors $\boldsymbol{u}_1,\dots,\boldsymbol{u}_k$ of the true model, when online SGD with weight decay is used for training. This phenomenon has several important consequences when $k \ll d$. First, by employing uniform convergence on this smaller subspace, we establish a generalization error bound of $O(\sqrt{kd/T})$ after $T$ iterations of SGD, which is independent of the width of the NN. We further demonstrate that SGD-trained ReLU NNs can learn a single-index target of the form $y=f(\langle\boldsymbol{u},\boldsymbol{x}\rangle)+\epsilon$ by recovering the principal direction, with a sample complexity linear in $d$ (up to log factors), where $f$ is a monotonic function with at most polynomial growth and $\epsilon$ is the noise. This is in contrast to the known $d^{\Omega(p)}$ sample requirement to learn any degree-$p$ polynomial in the kernel regime, and it shows that NNs trained with SGD can outperform the neural tangent kernel at initialization. Finally, we also provide compressibility guarantees for NNs using the approximate low-rank structure produced by SGD.
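
To make the setting concrete, below is a minimal NumPy sketch (not the authors' code) of the setup the abstract describes: a two-layer ReLU network whose first layer is trained with online SGD plus weight decay on a Gaussian multi-index target, while tracking how much of the first-layer weights' energy lies in the span of $\boldsymbol{u}_1,\dots,\boldsymbol{u}_k$. The dimensions, the link function `g`, the step size, the weight-decay strength, and the choice to freeze the second layer are illustrative assumptions, not the paper's configuration. The paper's result is that, in this regime, training drives the first-layer weights toward that $k$-dimensional subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes and hyperparameters (assumptions, not from the paper).
d, k, width = 50, 2, 200            # input dim, index dim, network width
T, lr, wd, noise = 20000, 0.02, 1e-3, 0.1

# Ground-truth index directions u_1, ..., u_k (orthonormal columns of U).
U, _ = np.linalg.qr(rng.standard_normal((d, k)))

def g(z):
    # A simple multi-index link function chosen for illustration.
    return np.tanh(z[0]) + 0.5 * z[1] ** 2

# Two-layer ReLU network: f(x) = a^T relu(W x + b); only W and b are updated here.
W = rng.standard_normal((width, d)) / np.sqrt(d)
b = np.zeros(width)
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)

def alignment(W, U):
    """Fraction of the first-layer weights' energy inside span(U)."""
    proj = W @ U @ U.T
    return np.linalg.norm(proj) ** 2 / np.linalg.norm(W) ** 2

for t in range(T):
    # Online SGD: one fresh Gaussian sample per step.
    x = rng.standard_normal(d)
    y = g(U.T @ x) + noise * rng.standard_normal()

    pre = W @ x + b
    h = np.maximum(pre, 0.0)
    pred = a @ h
    err = pred - y                              # squared-loss residual

    # Gradient of 0.5 * err^2 w.r.t. first-layer parameters,
    # with weight decay applied to W.
    grad_pre = err * a * (pre > 0)
    W -= lr * (np.outer(grad_pre, x) + wd * W)
    b -= lr * grad_pre

# At random initialization the alignment is roughly k/d; training should increase it.
print(f"alignment with span(u_1,...,u_k): {alignment(W, U):.3f}")
```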
