Mean Field Analysis of Neural Networks: A Central Limit Theorem
arXiv:1808.09372 · 28 August 2018
Justin A. Sirignano, K. Spiliopoulos · MLT
Papers citing "Mean Field Analysis of Neural Networks: A Central Limit Theorem" (47 of 47 shown)
1. Mirror Mean-Field Langevin Dynamics
   Anming Gu, Juno Kim · 05 May 2025 · 31 / 0 / 0
2. Don't be lazy: CompleteP enables compute-efficient deep transformers
   Nolan Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Bill Li, Blake Bordelon, Shane Bergsma, C. Pehlevan, Boris Hanin, Joel Hestness · 02 May 2025 · 39 / 0 / 0
3. Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
   Ziang Chen, Rong Ge · MLT · 10 Jan 2025 · 59 / 1 / 0
4. Extended convexity and smoothness and their applications in deep learning
   Binchuan Qi, Wei Gong, Li Li · 08 Oct 2024 · 61 / 0 / 0
5. Symmetries in Overparametrized Neural Networks: A Mean-Field View
   Javier Maass Martínez, Joaquin Fontbona · FedML, MLT · 30 May 2024 · 38 / 2 / 0
6. Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions
   Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Luca Pesce, Ludovic Stephan · 24 May 2024 · 61 / 12 / 0
7. Mean-field underdamped Langevin dynamics and its spacetime discretization
   Qiang Fu, Ashia Wilson · 26 Dec 2023 · 34 / 4 / 0
8. A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
   Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban · MLT · 11 Oct 2023 · 34 / 19 / 0
9. Quantitative CLTs in Deep Neural Networks
   Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati · BDL · 12 Jul 2023 · 23 / 11 / 0
10. Understanding the Initial Condensation of Convolutional Neural Networks
    Zhangchen Zhou, Hanxu Zhou, Yuqing Li, Zhi-Qin John Xu · MLT, AI4CE · 17 May 2023 · 23 / 5 / 0
11. High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance
    Krishnakumar Balasubramanian, Promit Ghosal, Ye He · 03 Apr 2023 · 28 / 5 / 0
12. Phase Diagram of Initial Condensation for Two-layer Neural Networks
    Zheng Chen, Yuqing Li, Tao Luo, Zhaoguang Zhou, Z. Xu · MLT, AI4CE · 12 Mar 2023 · 43 / 8 / 0
13. Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems
    Atsushi Nitanda, Kazusato Oko, Denny Wu, Nobuhito Takenouchi, Taiji Suzuki · 06 Mar 2023 · 24 / 3 / 0
14. Stochastic Modified Flows, Mean-Field Limits and Dynamics of Stochastic Gradient Descent
    Benjamin Gess, Sebastian Kassing, Vitalii Konarovskyi · DiffM · 14 Feb 2023 · 26 / 6 / 0
15. From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks
    Luca Arnaboldi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro · MLT · 12 Feb 2023 · 30 / 31 / 0
16. An Analysis of Attention via the Lens of Exchangeability and Latent Variable Models
    Yufeng Zhang, Boyi Liu, Qi Cai, Lingxiao Wang, Zhaoran Wang · 30 Dec 2022 · 45 / 11 / 0
17. Proximal Mean Field Learning in Shallow Neural Networks
    Alexis M. H. Teter, Iman Nodozi, A. Halder · FedML · 25 Oct 2022 · 40 / 1 / 0
18. Global Convergence of SGD On Two Layer Neural Nets
    Pulkit Gopalani, Anirbit Mukherjee · 20 Oct 2022 · 20 / 5 / 0
19. Neural parameter calibration for large-scale multi-agent models
    Thomas Gaskin, G. Pavliotis, Mark Girolami · AI4TS · 27 Sep 2022 · 23 / 23 / 0
20. The Neural Race Reduction: Dynamics of Abstraction in Gated Networks
    Andrew M. Saxe, Shagun Sodhani, Sam Lewallen · AI4CE · 21 Jul 2022 · 28 / 34 / 0
21. Neural Networks can Learn Representations with Gradient Descent
    Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi · SSL, MLT · 30 Jun 2022 · 17 / 112 / 0
22. High-dimensional limit theorems for SGD: Effective dynamics and critical scaling
    Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath · 08 Jun 2022 · 49 / 59 / 0
23. Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width
    Hanxu Zhou, Qixuan Zhou, Zhenyuan Jin, Tao Luo, Yaoyu Zhang, Zhi-Qin John Xu · 24 May 2022 · 22 / 20 / 0
24. Mean-Field Nonparametric Estimation of Interacting Particle Systems
    Rentian Yao, Xiaohui Chen, Yun Yang · 16 May 2022 · 43 / 9 / 0
25. Provably convergent quasistatic dynamics for mean-field two-player zero-sum games
    Chao Ma, Lexing Ying · MLT · 15 Feb 2022 · 24 / 11 / 0
26. Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks
    R. Veiga, Ludovic Stephan, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová · MLT · 01 Feb 2022 · 10 / 31 / 0
27. Convex Analysis of the Mean Field Langevin Dynamics
    Atsushi Nitanda, Denny Wu, Taiji Suzuki · MLT · 25 Jan 2022 · 59 / 64 / 0
28. Overview frequency principle/spectral bias in deep learning
    Z. Xu, Yaoyu Zhang, Tao Luo · FaML · 19 Jan 2022 · 25 / 65 / 0
29. Asymptotic properties of one-layer artificial neural networks with sparse connectivity
    Christian Hirsch, Matthias Neumann, Volker Schmidt · 01 Dec 2021 · 11 / 1 / 0
30. Parallel Deep Neural Networks Have Zero Duality Gap
    Yifei Wang, Tolga Ergen, Mert Pilanci · 13 Oct 2021 · 79 / 10 / 0
31. Dual Training of Energy-Based Models with Overparametrized Shallow Neural Networks
    Carles Domingo-Enrich, A. Bietti, Marylou Gabrié, Joan Bruna, Eric Vanden-Eijnden · FedML · 11 Jul 2021 · 32 / 6 / 0
32. Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction
    Dominik Stöger, Mahdi Soltanolkotabi · ODL · 28 Jun 2021 · 31 / 74 / 0
33. Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training
    Cong Fang, Hangfeng He, Qi Long, Weijie J. Su · FAtt · 29 Jan 2021 · 122 / 165 / 0
34. Align, then memorise: the dynamics of learning with feedback alignment
    Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt · 24 Nov 2020 · 26 / 36 / 0
35. Machine Learning and Computational Mathematics
    Weinan E · PINN, AI4CE · 23 Sep 2020 · 21 / 61 / 0
36. Quantitative Propagation of Chaos for SGD in Wide Neural Networks
    Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli · 13 Jul 2020 · 16 / 25 / 0
37. The Gaussian equivalence of generative models for learning with shallow neural networks
    Sebastian Goldt, Bruno Loureiro, Galen Reeves, Florent Krzakala, M. Mézard, Lenka Zdeborová · BDL · 25 Jun 2020 · 33 / 100 / 0
38. What Do Neural Networks Learn When Trained With Random Labels?
    Hartmut Maennel, Ibrahim M. Alabdulmohsin, Ilya O. Tolstikhin, R. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers · FedML · 18 Jun 2020 · 40 / 86 / 0
39. A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth
    Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying · MLT · 11 Mar 2020 · 31 / 78 / 0
40. Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
    Lénaïc Chizat, Francis R. Bach · MLT · 11 Feb 2020 · 16 / 327 / 0
41. Mean-Field and Kinetic Descriptions of Neural Differential Equations
    Michael Herty, T. Trimborn, G. Visconti · 07 Jan 2020 · 28 / 6 / 0
42. Machine Learning from a Continuous Viewpoint
    E. Weinan, Chao Ma, Lei Wu · 30 Dec 2019 · 21 / 102 / 0
43. The generalization error of random features regression: Precise asymptotics and double descent curve
    Song Mei, Andrea Montanari · 14 Aug 2019 · 39 / 624 / 0
44. Theory of the Frequency Principle for General Deep Neural Networks
    Tao Luo, Zheng Ma, Zhi-Qin John Xu, Yaoyu Zhang · 21 Jun 2019 · 18 / 78 / 0
45. Maximum Mean Discrepancy Gradient Flow
    Michael Arbel, Anna Korba, Adil Salim, A. Gretton · 11 Jun 2019 · 24 / 158 / 0
46. Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections
    E. Weinan, Chao Ma, Qingcan Wang, Lei Wu · MLT · 10 Apr 2019 · 24 / 22 / 0
47. Unbiased deep solvers for linear parametric PDEs
    Marc Sabate Vidales, David Siska, Lukasz Szpruch · OOD · 11 Oct 2018 · 24 / 7 / 0