Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape
Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montúfar
31 May 2023
arXiv: 2305.19510
Papers citing "Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape" (6 papers shown)

Derivation of effective gradient flow equations and dynamical truncation of training data in Deep Learning
Thomas Chen
13 Jan 2025

Gradient flow in parameter space is equivalent to linear interpolation in output space
Thomas Chen, Patrícia Muñoz Ewald
02 Aug 2024

Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension
Kedar Karhadkar, Michael Murray, Guido Montúfar
23 May 2024

Continual Learning with Weight Interpolation
Jędrzej Kozal, Jan Wasilewski, Bartosz Krawczyk, Michał Woźniak
05 Apr 2024

The Real Tropical Geometry of Neural Networks
Marie-Charlotte Brandenburg, Georg Loho, Guido Montúfar
18 Mar 2024

Functional dimension of feedforward ReLU neural networks
J. E. Grigsby, Kathryn A. Lindsey, R. Meyerhoff, Chen-Chun Wu
08 Sep 2022