Optimization and Generalization of Shallow Neural Networks with Quadratic Activation Functions

27 June 2020
Stefano Sarao Mannelli
Eric Vanden-Eijnden
Lenka Zdeborová
arXiv: 2006.15459 (abs · PDF · HTML)
Abstract

We study the dynamics of optimization and the generalization properties of one-hidden-layer neural networks with quadratic activation function in the over-parametrized regime, where the layer width $m$ is larger than the input dimension $d$. We consider a teacher-student scenario in which the teacher has the same structure as the student, with a hidden layer of smaller width $m^* \le m$. We describe how the empirical loss landscape is affected by the number $n$ of data samples and the width $m^*$ of the teacher network. In particular, we determine how the probability that there are no spurious minima on the empirical loss depends on $n$, $d$, and $m^*$, thereby establishing conditions under which the neural network can in principle recover the teacher. We also show that under the same conditions gradient descent dynamics on the empirical loss converges and leads to small generalization error, i.e., it enables recovery in practice. Finally, we characterize the time-convergence rate of gradient descent in the limit of a large number of samples. These results are confirmed by numerical experiments.
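
As a concrete illustration of the setting described in the abstract, here is a minimal NumPy sketch of the teacher-student setup with quadratic activations, trained by plain gradient descent on the empirical square loss. It assumes second-layer weights fixed to one (so the network computes the sum of squared pre-activations), i.i.d. Gaussian inputs, and illustrative values for d, m*, m, n, the learning rate, and the number of steps; none of these choices are taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m_star, m, n = 20, 3, 40, 2000   # input dim, teacher width, student width (m > d), samples
lr, steps = 0.01, 2000              # illustrative learning rate and iteration count

# Teacher and student: one hidden layer, quadratic activation, second-layer weights fixed to 1,
# so f(x; W) = sum_i (w_i . x)^2 = x^T (W^T W) x.
W_teacher = rng.standard_normal((m_star, d)) / np.sqrt(d)
W = rng.standard_normal((m, d)) / np.sqrt(d)

X = rng.standard_normal((n, d))                 # i.i.d. Gaussian inputs
y = ((X @ W_teacher.T) ** 2).sum(axis=1)        # teacher labels

def loss_and_grad(W, X, y):
    pre = X @ W.T                               # (n, m) pre-activations
    pred = (pre ** 2).sum(axis=1)               # quadratic activation, summed over hidden units
    resid = pred - y
    loss = 0.5 * np.mean(resid ** 2)
    # d loss / d w_i = (2/n) * sum_n resid_n * (w_i . x_n) * x_n
    grad = 2.0 * (pre * resid[:, None]).T @ X / n
    return loss, grad

for t in range(steps):
    loss, grad = loss_and_grad(W, X, y)
    W -= lr * grad                              # gradient descent on the empirical loss

# Generalization check on fresh samples from the same input distribution
X_test = rng.standard_normal((5000, d))
y_test = ((X_test @ W_teacher.T) ** 2).sum(axis=1)
test_err = np.mean((((X_test @ W.T) ** 2).sum(axis=1) - y_test) ** 2)
print(f"train loss {loss:.3e}, test error {test_err:.3e}")
```

In the regime the paper analyzes (width larger than the input dimension and enough samples), a run of this kind typically drives both the training loss and the test error close to zero, which is the "recovery in practice" phenomenon the abstract refers to.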
