Escaping mediocrity: how two-layer networks learn hard generalized linear models with SGD

29 May 2023 · arXiv:2305.18502
Luca Arnaboldi, Florent Krzakala, Bruno Loureiro, Ludovic Stephan
MLT
ArXiv · PDF · HTML

Papers citing "Escaping mediocrity: how two-layer networks learn hard generalized linear models with SGD"

5 / 5 papers shown
  • A theoretical perspective on mode collapse in variational inference
    Roman Soletskyi, Marylou Gabrié, Bruno Loureiro · DRL · 17 Oct 2024
  • Classifying Overlapping Gaussian Mixtures in High Dimensions: From Optimal Classifiers to Neural Nets
    Khen Cohen, Noam Levi, Yaron Oz · BDL · 28 May 2024
  • Stochastic Gradient Flow Dynamics of Test Risk and its Exact Solution for Weak Features
    Rodrigo Veiga, Anastasia Remizova, Nicolas Macris · 12 Feb 2024
  • Learning Single-Index Models with Shallow Neural Networks
    A. Bietti, Joan Bruna, Clayton Sanford, M. Song · 27 Oct 2022
  • Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach
    Grant M. Rotskoff, Eric Vanden-Eijnden · 02 May 2018