ResearchTrend.AI

On the Convergence of Gradient Descent Training for Two-layer ReLU-networks in the Mean Field Regime
Stephan Wojtowytsch — arXiv:2005.13530, 27 May 2020 [MLT]
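The paper concerns gradient descent training of two-layer ReLU networks under mean-field scaling, i.e. with a 1/m factor on the output sum rather than the 1/sqrt(m) of the NTK regime. A minimal numerical sketch of that parameterization (all sizes, data, and step-size choices here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mean-field parameterization of a two-layer ReLU network:
#   f(x) = (1/m) * sum_j a_j * relu(w_j . x)
# The 1/m output scaling is what places a wide network in the
# mean-field regime, where gradient descent approximates a gradient
# flow on the empirical distribution of neurons (a_j, w_j).
m, d, n = 512, 2, 256     # width, input dim, sample size (toy values)
W = rng.normal(size=(m, d))
a = rng.normal(size=m)

def f(X):
    return np.maximum(X @ W.T, 0.0) @ a / m

# Toy regression problem: fit y = sin(x_1) under squared loss.
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])
lr = 0.1 * m              # step size scaled by m, the natural mean-field time scale

init_loss = float(np.mean((f(X) - y) ** 2))
for _ in range(500):
    pre = X @ W.T                          # (n, m) pre-activations
    act = np.maximum(pre, 0.0)
    err = act @ a / m - y                  # residuals, shape (n,)
    grad_a = act.T @ err / (n * m)         # dL/da_j for squared loss
    grad_W = ((err[:, None] * (pre > 0) * a).T @ X) / (n * m)  # dL/dw_j
    a -= lr * grad_a
    W -= lr * grad_W
final_loss = float(np.mean((f(X) - y) ** 2))
print(init_loss > final_loss)
```

Note that because gradients carry a 1/m factor under this scaling, the step size is taken proportional to m so that the induced function-space dynamics are O(1) in the width.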

Papers citing "On the Convergence of Gradient Descent Training for Two-layer ReLU-networks in the Mean Field Regime" (17 papers):
1. Understanding the training of infinitely deep and wide ResNets with Conditional Optimal Transport — Raphael Barboni, Gabriel Peyré, François-Xavier Vialard (19 Mar 2024)
2. Learning a Sparse Representation of Barron Functions with the Inverse Scale Space Flow — T. J. Heeringa, Tim Roith, Christoph Brune, Martin Burger (05 Dec 2023)
3. Global Optimality of Elman-type RNN in the Mean-Field Regime — Andrea Agazzi, Jian-Xiong Lu, Sayan Mukherjee (12 Mar 2023) [MLT]
4. On adversarial robustness and the use of Wasserstein ascent-descent dynamics to enforce it — Camilo A. Garcia Trillos, Nicolas García Trillos (09 Jan 2023)
5. Infinite-width limit of deep linear neural networks — Lénaïc Chizat, Maria Colombo, Xavier Fernández-Real, Alessio Figalli (29 Nov 2022)
6. Normalized gradient flow optimization in the training of ReLU artificial neural networks — Simon Eberle, Arnulf Jentzen, Adrian Riekert, G. Weiss (13 Jul 2022)
7. Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs — Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion (02 Jun 2022) [ODL]
8. A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling — Katy Craig, Karthik Elamvazhuthi, M. Haberland, O. Turanova (25 Feb 2022)
9. On the Global Convergence of Gradient Descent for multi-layer ResNets in the mean-field regime — Zhiyan Ding, Shi Chen, Qin Li, S. Wright (06 Oct 2021) [MLT, AI4CE]
10. Generalization Error of GAN from the Discriminator's Perspective — Hongkang Yang, Weinan E (08 Jul 2021) [GAN]
11. Global Convergence of Three-layer Neural Networks in the Mean Field Regime — H. Pham, Phan-Minh Nguyen (11 May 2021) [MLT, AI4CE]
12. Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions — Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek (19 Mar 2021)
13. On the emergence of simplex symmetry in the final and penultimate layers of neural network classifiers — Weinan E, Stephan Wojtowytsch (10 Dec 2020)
14. Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime — Andrea Agazzi, Jianfeng Lu (22 Oct 2020)
15. Representation formulas and pointwise properties for Barron functions — Weinan E, Stephan Wojtowytsch (10 Jun 2020)
16. Can Shallow Neural Networks Beat the Curse of Dimensionality? A mean field training perspective — Stephan Wojtowytsch, Weinan E (21 May 2020) [MLT]
17. Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss — Lénaïc Chizat, Francis R. Bach (11 Feb 2020) [MLT]