arXiv:2008.11245
Deep Networks and the Multiple Manifold Problem
Sam Buchanan, D. Gilboa, John N. Wright
25 August 2020

Papers citing "Deep Networks and the Multiple Manifold Problem" (40 of 40 papers shown)
| Title | Authors | Topics | Citations | Date |
|---|---|---|---|---|
| The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training | Andrea Montanari, Yiqiao Zhong | | 97 | 25 Jul 2020 |
| Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics | Taiji Suzuki | | 21 | 11 Jul 2020 |
| When Do Neural Networks Outperform Kernel Methods? | Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari | | 189 | 24 Jun 2020 |
| Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent | Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, Adam R. Klivans | MLT, ODL | 72 | 22 Jun 2020 |
| Directional convergence and alignment in deep learning | Ziwei Ji, Matus Telgarsky | | 171 | 11 Jun 2020 |
| Deep Learning Techniques for Inverse Problems in Imaging | Greg Ongie, A. Jalal, Christopher A. Metzler, Richard G. Baraniuk, A. Dimakis, Rebecca Willett | | 535 | 12 May 2020 |
| Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks | Tengyuan Liang, Hai Tran-Bach | | 11 | 09 Apr 2020 |
| The large learning rate phase of deep learning: the catapult mechanism | Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari | ODL | 241 | 04 Mar 2020 |
| The Two Regimes of Deep Network Training | Guillaume Leclerc, Aleksander Madry | | 45 | 24 Feb 2020 |
| Generalisation error in learning with random features and the hidden manifold model | Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová | | 172 | 21 Feb 2020 |
| Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss | Lénaïc Chizat, Francis R. Bach | MLT | 341 | 11 Feb 2020 |
| Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width | Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, R. Socher | | 22 | 10 Feb 2020 |
| Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks | Yu Bai, Jason D. Lee | | 116 | 03 Oct 2019 |
| Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks | Ziwei Ji, Matus Telgarsky | | 178 | 26 Sep 2019 |
| Modelling the influence of data structure on learning in neural networks: the hidden manifold model | Sebastian Goldt, M. Mézard, Florent Krzakala, Lenka Zdeborová | BDL | 51 | 25 Sep 2019 |
| Asymptotics of Wide Networks from Feynman Diagrams | Ethan Dyer, Guy Gur-Ari | | 115 | 25 Sep 2019 |
| Dynamics of Deep Neural Networks and Neural Tangent Hierarchy | Jiaoyang Huang, H. Yau | | 151 | 18 Sep 2019 |
| Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks | Yuanzhi Li, Colin Wei, Tengyu Ma | | 299 | 10 Jul 2019 |
| Robust and interpretable blind image denoising via bias-free convolutional neural networks | S. Mohan, Zahra Kadkhodaie, Eero P. Simoncelli, C. Fernandez-Granda | AI4CE | 125 | 13 Jun 2019 |
| Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian | Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi | MLT | 88 | 12 Jun 2019 |
| Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks | Yuan Cao, Quanquan Gu | MLT, AI4CE | 392 | 30 May 2019 |
| What Can ResNet Learn Efficiently, Going Beyond Kernels? | Zeyuan Allen-Zhu, Yuanzhi Li | | 183 | 24 May 2019 |
| Linearized two-layers neural networks in high dimension | Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari | MLT | 243 | 27 Apr 2019 |
| On Exact Computation with an Infinitely Wide Neural Net | Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang | | 928 | 26 Apr 2019 |
| Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent | Jaehoon Lee, Lechao Xiao, S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Narain Sohl-Dickstein, Jeffrey Pennington | | 1,111 | 18 Feb 2019 |
| Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit | Song Mei, Theodor Misiakiewicz, Andrea Montanari | MLT | 279 | 16 Feb 2019 |
| Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks | Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang | MLT | 974 | 24 Jan 2019 |
| Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers | Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang | MLT | 775 | 12 Nov 2018 |
| A Convergence Theory for Deep Learning via Over-Parameterization | Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song | AI4CE, ODL | 1,470 | 09 Nov 2018 |
| Gradient Descent Provably Optimizes Over-parameterized Neural Networks | S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh | MLT, ODL | 1,276 | 04 Oct 2018 |
| Neural Tangent Kernel: Convergence and Generalization in Neural Networks | Arthur Jacot, Franck Gabriel, Clément Hongler | | 3,225 | 20 Jun 2018 |
| Rate-Optimal Denoising with Deep Neural Networks | Reinhard Heckel, Wen Huang, Paul Hand, V. Voroninski | | 23 | 22 May 2018 |
| A Mean Field View of the Landscape of Two-Layers Neural Networks | Song Mei, Andrea Montanari, Phan-Minh Nguyen | MLT | 863 | 18 Apr 2018 |
| Deep Image Prior | Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky | SupR | 3,169 | 29 Nov 2017 |
| Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance | Jonathan Niles-Weed, Francis R. Bach | | 421 | 01 Jul 2017 |
| Compressed Sensing using Generative Models | Ashish Bora, A. Jalal, Eric Price, A. Dimakis | | 813 | 09 Mar 2017 |
| Group Equivariant Convolutional Networks | Taco S. Cohen, Max Welling | BDL | 1,947 | 24 Feb 2016 |
| Concentration inequalities for order statistics | S. Boucheron, Maud Thomas | | 82 | 31 Jul 2012 |
| Invariant Scattering Convolution Networks | Joan Bruna, S. Mallat | | 1,279 | 05 Mar 2012 |
| Group Invariant Scattering | S. Mallat | | 993 | 12 Jan 2011 |