ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.
SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data

27 October 2017
Alon Brutzkus, Amir Globerson, Eran Malach, Shai Shalev-Shwartz
MLT

Papers citing "SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data"

2 / 52 papers shown
Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
Peter L. Bartlett, D. Helmbold, Philip M. Long
16 Feb 2018

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015