Analysis of a Two-Layer Neural Network via Displacement Convexity
Adel Javanmard, Marco Mondelli, Andrea Montanari
5 January 2019 · arXiv:1901.01375 · MLT

Papers citing "Analysis of a Two-Layer Neural Network via Displacement Convexity"

47 papers:
• Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime. Francesco Camilli, D. Tieplova, Eleonora Bergamin, Jean Barbier. 06 May 2025.
• Kernel Approximation of Fisher-Rao Gradient Flows. Jia Jie Zhu, Alexander Mielke. 27 Oct 2024.
• Partially Observed Trajectory Inference using Optimal Transport and a Dynamics Prior. Anming Gu, Edward Chien, Kristjan Greenewald. 11 Jun 2024.
• Understanding the training of infinitely deep and wide ResNets with Conditional Optimal Transport. Raphael Barboni, Gabriel Peyré, François-Xavier Vialard. 19 Mar 2024.
• Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features. Simone Bombari, Marco Mondelli. 05 Feb 2024.
• Fundamental limits of overparametrized shallow neural networks for supervised learning. Francesco Camilli, D. Tieplova, Jean Barbier. 11 Jul 2023.
• Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction. Taiji Suzuki, Denny Wu, Atsushi Nitanda. 12 Jun 2023.
• Doubly Regularized Entropic Wasserstein Barycenters. Lénaïc Chizat. 21 Mar 2023.
• Generalization and Stability of Interpolating Neural Networks with Minimal Width. Hossein Taheri, Christos Thrampoulidis. 18 Feb 2023.
• Stochastic Modified Flows, Mean-Field Limits and Dynamics of Stochastic Gradient Descent. Benjamin Gess, Sebastian Kassing, Vitalii Konarovskyi. 14 Feb 2023. [DiffM]
• Efficient displacement convex optimization with particle gradient descent. Hadi Daneshmand, J. Lee, Chi Jin. 09 Feb 2023.
• Birth-death dynamics for sampling: Global convergence, approximations and their asymptotics. Yulong Lu, D. Slepčev, Lihan Wang. 01 Nov 2022.
• A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks. Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna. 28 Oct 2022. [MLT]
• Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence. Diyuan Wu, Vyacheslav Kungurtsev, Marco Mondelli. 13 Oct 2022.
• Neural Networks can Learn Representations with Gradient Descent. Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi. 30 Jun 2022. [SSL, MLT]
• Sharp asymptotics on the compression of two-layer neural networks. Mohammad Hossein Amani, Simone Bombari, Marco Mondelli, Rattana Pukdee, Stefano Rini. 17 May 2022. [MLT]
• Trajectory Inference via Mean-field Langevin in Path Space. Lénaïc Chizat, Stephen X. Zhang, Matthieu Heitz, Geoffrey Schiebinger. 14 May 2022.
• On Feature Learning in Neural Networks with Global Convergence Guarantees. Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna. 22 Apr 2022. [MLT]
• A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling. Katy Craig, Karthik Elamvazhuthi, M. Haberland, O. Turanova. 25 Feb 2022.
• Convex Analysis of the Mean Field Langevin Dynamics. Atsushi Nitanda, Denny Wu, Taiji Suzuki. 25 Jan 2022. [MLT]
• Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic. Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael I. Jordan, Zhaoran Wang. 27 Dec 2021.
• Global convergence of ResNets: From finite to infinite width using linear parameterization. Raphael Barboni, Gabriel Peyré, François-Xavier Vialard. 10 Dec 2021.
• Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks. A. Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli. 03 Nov 2021. [MLT]
• Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training. H. Pham, Phan-Minh Nguyen. 29 Oct 2021.
• Convergence rates for shallow neural networks learned by gradient descent. Alina Braun, Michael Kohler, S. Langer, Harro Walk. 20 Jul 2021.
• Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction. Dominik Stöger, Mahdi Soltanolkotabi. 28 Jun 2021. [ODL]
• Stochastic gradient descent with noise of machine learning type. Part II: Continuous time analysis. Stephan Wojtowytsch. 04 Jun 2021.
• Global Convergence of Three-layer Neural Networks in the Mean Field Regime. H. Pham, Phan-Minh Nguyen. 11 May 2021. [MLT, AI4CE]
• Particle Dual Averaging: Optimization of Mean Field Neural Networks with Global Convergence Rate Analysis. Atsushi Nitanda, Denny Wu, Taiji Suzuki. 31 Dec 2020.
• Dataset Dynamics via Gradient Flows in Probability Space. David Alvarez-Melis, Nicolò Fusi. 24 Oct 2020.
• Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime. Andrea Agazzi, Jianfeng Lu. 22 Oct 2020.
• Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't. Weinan E, Chao Ma, Stephan Wojtowytsch, Lei Wu. 22 Sep 2020. [AI4CE]
• Quantitative Propagation of Chaos for SGD in Wide Neural Networks. Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Şimşekli. 13 Jul 2020.
• On the Empirical Neural Tangent Kernel of Standard Finite-Width Convolutional Neural Network Architectures. M. Samarin, Volker Roth, David Belius. 24 Jun 2020.
• A Mean-Field Theory for Learning the Schönberg Measure of Radial Basis Functions. M. B. Khuzani, Yinyu Ye, S. Napel, Lei Xing. 23 Jun 2020.
• Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory. Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang. 08 Jun 2020. [OOD, MLT]
• Consensus-Based Optimization on the Sphere: Convergence to Global Minimizers and Machine Learning. M. Fornasier, Hui Huang, L. Pareschi, Philippe Sünnen. 31 Jan 2020.
• A Rigorous Framework for the Mean Field Limit of Multilayer Neural Networks. Phan-Minh Nguyen, H. Pham. 30 Jan 2020. [AI4CE]
• Avoiding Spurious Local Minima in Deep Quadratic Networks. A. Kazemipour, Brett W. Larsen, S. Druckmann. 31 Dec 2019. [ODL]
• Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks. A. Shevchenko, Marco Mondelli. 20 Dec 2019.
• A Mean-Field Theory for Kernel Alignment with Random Features in Generative and Discriminative Models. M. B. Khuzani, Liyue Shen, Shahin Shahrampour, Lei Xing. 25 Sep 2019.
• The generalization error of random features regression: Precise asymptotics and double descent curve. Song Mei, Andrea Montanari. 14 Aug 2019.
• Sparse Optimization on Measures with Over-parameterized Gradient Descent. Lénaïc Chizat. 24 Jul 2019.
• Mean-Field Langevin Dynamics and Energy Landscape of Neural Networks. Kaitong Hu, Zhenjie Ren, David Šiška, Łukasz Szpruch. 19 May 2019. [MLT]
• A Selective Overview of Deep Learning. Jianqing Fan, Cong Ma, Yiqiao Zhong. 10 Apr 2019. [BDL, VLM]
• Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. Song Mei, Theodor Misiakiewicz, Andrea Montanari. 16 Feb 2019. [MLT]
• Mean Field Limit of the Learning Dynamics of Multilayer Neural Networks. Phan-Minh Nguyen. 07 Feb 2019. [AI4CE]