ResearchTrend.AI
arXiv:2002.01987

Function approximation by neural nets in the mean-field regime: Entropic regularization and controlled McKean-Vlasov dynamics

5 February 2020
Belinda Tzen, Maxim Raginsky

Papers citing "Function approximation by neural nets in the mean-field regime: Entropic regularization and controlled McKean-Vlasov dynamics"

12 papers
Global Convergence of SGD For Logistic Loss on Two Layer Neural Nets
Pulkit Gopalani, Samyak Jha, Anirbit Mukherjee
17 Sep 2023
Unveiling Invariances via Neural Network Pruning
Derek Xu, Luke Huan, Wei Wang
15 Sep 2023
Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods
International Conference on Learning Representations (ICLR), 2022
Shunta Akiyama, Taiji Suzuki
30 May 2022
Gradient flows on graphons: existence, convergence, continuity equations
Sewoong Oh, Soumik Pal, Raghav Somani, Raghavendra Tripathi
18 Nov 2021
On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
International Conference on Machine Learning (ICML), 2021
Shunta Akiyama, Taiji Suzuki
11 Jun 2021
Non-asymptotic approximations of neural networks by Gaussian processes
Conference on Learning Theory (COLT), 2021
Ronen Eldan, Dan Mikulincer, T. Schramm
17 Feb 2021
Mathematical Models of Overparameterized Neural Networks
Proceedings of the IEEE, 2020
Cong Fang, Hanze Dong, Tong Zhang
27 Dec 2020
Efficient constrained sampling via the mirror-Langevin algorithm
Neural Information Processing Systems (NeurIPS), 2020
Kwangjun Ahn, Sinho Chewi
30 Oct 2020
Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics
Neural Information Processing Systems (NeurIPS), 2020
Taiji Suzuki
11 Jul 2020
Predicting the outputs of finite deep neural networks trained with noisy gradients
Physical Review E (PRE), 2020
Gadi Naveh, Oded Ben-David, H. Sompolinsky, Zohar Ringel
02 Apr 2020
A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks
Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang
10 Feb 2020
Mean-Field Neural ODEs via Relaxed Optimal Control
Jean-François Jabir, D. Šiška, Łukasz Szpruch
11 Dec 2019