Global Convergence of SGD On Two Layer Neural Nets

20 October 2022
Pulkit Gopalani, Anirbit Mukherjee
Papers citing "Global Convergence of SGD On Two Layer Neural Nets"

5 papers shown:

  1. "Training a Two Layer ReLU Network Analytically" by Adrian Barbu (06 Apr 2023)
  2. "Restricted Strong Convexity of Deep Learning Models with Smooth Activations" by A. Banerjee, Pedro Cisneros-Velarde, Libin Zhu, M. Belkin (29 Sep 2022)
  3. "Efficiently Learning Any One Hidden Layer ReLU Network From Queries" by Sitan Chen, Adam R. Klivans, Raghu Meka (08 Nov 2021)
  4. "Dynamics of Local Elasticity During Training of Neural Nets" by Soham Dan, Anirbit Mukherjee, Avirup Das, Phanideep Gampa (01 Nov 2021)
  5. "A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network" by Mo Zhou, Rong Ge, Chi Jin (04 Feb 2021)