arXiv: 1804.06561
A Mean Field View of the Landscape of Two-Layers Neural Networks
18 April 2018
Song Mei
Andrea Montanari
Phan-Minh Nguyen
MLT
Papers citing "A Mean Field View of the Landscape of Two-Layers Neural Networks" (50 of 182 papers shown)
The loss landscape of deep linear neural networks: a second-order analysis
E. M. Achour
François Malgouyres
Sébastien Gerchinovitz
ODL
22
9
0
28 Jul 2021
Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II
Yossi Arjevani
M. Field
28
18
0
21 Jul 2021
The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion
D. Kunin
Javier Sagastuy-Breña
Lauren Gillespie
Eshed Margalit
Hidenori Tanaka
Surya Ganguli
Daniel L. K. Yamins
28
15
0
19 Jul 2021
Dual Training of Energy-Based Models with Overparametrized Shallow Neural Networks
Carles Domingo-Enrich
A. Bietti
Marylou Gabrié
Joan Bruna
Eric Vanden-Eijnden
FedML
30
6
0
11 Jul 2021
Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction
Dominik Stöger
Mahdi Soltanolkotabi
ODL
25
74
0
28 Jun 2021
Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent
Spencer Frei
Quanquan Gu
15
25
0
25 Jun 2021
The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Geoff Pleiss
John P. Cunningham
26
24
0
11 Jun 2021
The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization
Mufan Bill Li
Mihai Nica
Daniel M. Roy
23
33
0
07 Jun 2021
Learning particle swarming models from data with Gaussian processes
Jinchao Feng
Charles Kulick
Yunxiang Ren
Sui Tang
26
5
0
04 Jun 2021
Global Convergence of Three-layer Neural Networks in the Mean Field Regime
H. Pham
Phan-Minh Nguyen
MLT
AI4CE
38
19
0
11 May 2021
Relative stability toward diffeomorphisms indicates performance in deep nets
Leonardo Petrini
Alessandro Favero
Mario Geiger
M. Wyart
OOD
23
15
0
06 May 2021
Two-layer neural networks with values in a Banach space
Yury Korolev
21
23
0
05 May 2021
Generalization Guarantees for Neural Architecture Search with Train-Validation Split
Samet Oymak
Mingchen Li
Mahdi Soltanolkotabi
AI4CE
OOD
31
13
0
29 Apr 2021
Deep limits and cut-off phenomena for neural networks
B. Avelin
A. Karlsson
AI4CE
30
2
0
21 Apr 2021
The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
Q. Du
Yiqi Gu
Haizhao Yang
Chao Zhou
21
20
0
21 Mar 2021
Label-Imbalanced and Group-Sensitive Classification under Overparameterization
Ganesh Ramachandra Kini
Orestis Paraskevas
Samet Oymak
Christos Thrampoulidis
27
93
0
02 Mar 2021
Experiments with Rich Regime Training for Deep Learning
Xinyan Li
A. Banerjee
18
2
0
26 Feb 2021
Do Input Gradients Highlight Discriminative Features?
Harshay Shah
Prateek Jain
Praneeth Netrapalli
AAML
FAtt
18
57
0
25 Feb 2021
Wasserstein Proximal of GANs
A. Lin
Wuchen Li
Stanley Osher
Guido Montúfar
GAN
11
46
0
13 Feb 2021
Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training
Cong Fang
Hangfeng He
Qi Long
Weijie J. Su
FAtt
122
165
0
29 Jan 2021
A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Equations
Jianfeng Lu
Yulong Lu
Min Wang
23
37
0
05 Jan 2021
Align, then memorise: the dynamics of learning with feedback alignment
Maria Refinetti
Stéphane d'Ascoli
Ruben Ohana
Sebastian Goldt
26
36
0
24 Nov 2020
Neural collapse with unconstrained features
D. Mixon
Hans Parshall
Jianzong Pi
6
114
0
23 Nov 2020
Reliable Off-policy Evaluation for Reinforcement Learning
Jie Wang
Rui Gao
H. Zha
OffRL
17
11
0
08 Nov 2020
Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime
Andrea Agazzi
Jianfeng Lu
13
15
0
22 Oct 2020
Prediction intervals for Deep Neural Networks
Tullio Mancini
Hector F. Calvo-Pardo
Jose Olmo
UQCV
OOD
13
4
0
08 Oct 2020
Deep Equals Shallow for ReLU Networks in Kernel Regimes
A. Bietti
Francis R. Bach
12
86
0
30 Sep 2020
Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen
Adam R. Klivans
Raghu Meka
16
36
0
28 Sep 2020
Machine Learning and Computational Mathematics
Weinan E
PINN
AI4CE
16
61
0
23 Sep 2020
Deep Networks and the Multiple Manifold Problem
Sam Buchanan
D. Gilboa
John N. Wright
166
39
0
25 Aug 2020
Geometric compression of invariant manifolds in neural nets
J. Paccolat
Leonardo Petrini
Mario Geiger
Kevin Tyloo
M. Wyart
MLT
47
34
0
22 Jul 2020
Maximum likelihood estimation of potential energy in interacting particle systems from single-trajectory data
Xiaohui Chen
28
25
0
21 Jul 2020
Quantitative Propagation of Chaos for SGD in Wide Neural Networks
Valentin De Bortoli
Alain Durmus
Xavier Fontaine
Umut Simsekli
16
25
0
13 Jul 2020
Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory
Tao Luo
Haizhao Yang
11
73
0
28 Jun 2020
The Gaussian equivalence of generative models for learning with shallow neural networks
Sebastian Goldt
Bruno Loureiro
Galen Reeves
Florent Krzakala
M. Mézard
Lenka Zdeborová
BDL
33
100
0
25 Jun 2020
Representation formulas and pointwise properties for Barron functions
Weinan E
Stephan Wojtowytsch
18
79
0
10 Jun 2020
Machine Learning and Control Theory
A. Bensoussan
Yiqun Li
Dinh Phan Cao Nguyen
M. Tran
S. Yam
Xiang Zhou
AI4CE
24
12
0
10 Jun 2020
A Survey on Generative Adversarial Networks: Variants, Applications, and Training
Abdul Jabbar
Xi Li
Bourahla Omar
25
266
0
09 Jun 2020
Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory
Yufeng Zhang
Qi Cai
Zhuoran Yang
Yongxin Chen
Zhaoran Wang
OOD
MLT
58
11
0
08 Jun 2020
Can Shallow Neural Networks Beat the Curse of Dimensionality? A mean field training perspective
Stephan Wojtowytsch
Weinan E
MLT
13
48
0
21 May 2020
Symmetry & critical points for a model shallow neural network
Yossi Arjevani
M. Field
26
13
0
23 Mar 2020
A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth
Yiping Lu
Chao Ma
Yulong Lu
Jianfeng Lu
Lexing Ying
MLT
31
78
0
11 Mar 2020
The large learning rate phase of deep learning: the catapult mechanism
Aitor Lewkowycz
Yasaman Bahri
Ethan Dyer
Jascha Narain Sohl-Dickstein
Guy Gur-Ari
ODL
159
234
0
04 Mar 2020
A Spectral Analysis of Dot-product Kernels
M. Scetbon
Zaïd Harchaoui
110
2
0
28 Feb 2020
Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Lénaïc Chizat
Francis R. Bach
MLT
16
327
0
11 Feb 2020
Robustness of Bayesian Neural Networks to Gradient-Based Attacks
Ginevra Carbone
Matthew Wicker
Luca Laurenti
A. Patané
Luca Bortolussi
G. Sanguinetti
AAML
24
77
0
11 Feb 2020
Inference in Multi-Layer Networks with Matrix-Valued Unknowns
Parthe Pandit
Mojtaba Sahraee-Ardakan
S. Rangan
P. Schniter
A. Fletcher
18
6
0
26 Jan 2020
On the infinite width limit of neural networks with a standard parameterization
Jascha Narain Sohl-Dickstein
Roman Novak
S. Schoenholz
Jaehoon Lee
11
47
0
21 Jan 2020
Mean-Field and Kinetic Descriptions of Neural Differential Equations
Michael Herty
T. Trimborn
G. Visconti
20
6
0
07 Jan 2020
Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
Shiyu Liang
Ruoyu Sun
R. Srikant
25
19
0
31 Dec 2019