Neural Networks Learning and Memorization with (almost) no Over-Parameterization
Amit Daniely
arXiv:1911.09873, 22 November 2019
Papers citing "Neural Networks Learning and Memorization with (almost) no Over-Parameterization" (10 of 10 shown):
Analysis of the expected L_2 error of an over-parametrized deep neural network estimate learned by gradient descent without regularization
Selina Drews, Michael Kohler (24 Nov 2023)
Efficient SGD Neural Network Training via Sublinear Activated Neuron Identification
Lianke Qin, Zhao Song, Yuanyuan Yang (13 Jul 2023)
Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data
Jonathan W. Siegel (02 Feb 2023)
When Expressivity Meets Trainability: Fewer than n Neurons Can Work
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo (21 Oct 2022)
Size and depth of monotone neural networks: interpolation and approximation
Dan Mikulincer, Daniel Reichman (12 Jul 2022)
Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
Alexander Munteanu, Simon Omlor, Zhao Song, David P. Woodruff (26 Jun 2022)
Randomly Initialized One-Layer Neural Networks Make Data Linearly Separable
Promit Ghosal, Srinath Mahankali, Yihang Sun (24 May 2022)
Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, V. Cevher (02 Nov 2021)
The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Andrea Montanari, Yiqiao Zhong (25 Jul 2020)
Learning Parities with Neural Networks
Amit Daniely, Eran Malach (18 Feb 2020)