ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't

arXiv:2009.10713, 22 September 2020
Weinan E, Chao Ma, Stephan Wojtowytsch, Lei Wu
Community: AI4CE

Papers citing "Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't"

29 of 29 citing papers shown
  • Sharp higher order convergence rates for the Adam optimizer. Steffen Dereich, Arnulf Jentzen, Adrian Riekert. 28 Apr 2025. [ODL]
  • Robust Concept Erasure Using Task Vectors. Minh Pham, Kelly O. Marshall, Chinmay Hegde, Niv Cohen. 21 Feb 2025.
  • High-dimensional classification problems with Barron regular boundaries under margin conditions. Jonathan García, Philipp Petersen. 10 Dec 2024.
  • Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss. T. Getu, Georges Kaddoum, M. Bennis. 13 Sep 2023.
  • Geometry and Local Recovery of Global Minima of Two-layer Neural Networks at Overparameterization. Leyang Zhang, Yaoyu Zhang, Tao Luo. 01 Sep 2023.
  • Embeddings between Barron spaces with higher order activation functions. T. J. Heeringa, L. Spek, Felix L. Schwenninger, C. Brune. 25 May 2023.
  • Reinforcement Learning with Function Approximation: From Linear to Nonlinear. Jihao Long, Jiequn Han. 20 Feb 2023.
  • Infinite-width limit of deep linear neural networks. Lénaïc Chizat, Maria Colombo, Xavier Fernández-Real, Alessio Figalli. 29 Nov 2022.
  • To be or not to be stable, that is the question: understanding neural networks for inverse problems. David Evangelista, J. Nagy, E. Morotti, E. L. Piccolomini. 24 Nov 2022.
  • Duality for Neural Networks through Reproducing Kernel Banach Spaces. L. Spek, T. J. Heeringa, Felix L. Schwenninger, C. Brune. 09 Nov 2022.
  • Asymptotic-Preserving Neural Networks for hyperbolic systems with diffusive scaling. Giulia Bertaglia. 17 Oct 2022. [AI4CE]
  • Approximation results for Gradient Descent trained Shallow Neural Networks in 1d. R. Gentile, G. Welper. 17 Sep 2022. [ODL]
  • SRMD: Sparse Random Mode Decomposition. Nicholas Richardson, Hayden Schaeffer, Giang Tran. 12 Apr 2022.
  • HARFE: Hard-Ridge Random Feature Expansion. Esha Saha, Hayden Schaeffer, Giang Tran. 06 Feb 2022.
  • Optimal learning of high-dimensional classification problems using deep neural networks. P. Petersen, F. Voigtlaender. 23 Dec 2021.
  • Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions. Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa. 13 Dec 2021. [MLT]
  • Embedding Principle: a hierarchical structure of loss landscape of deep neural networks. Yaoyu Zhang, Yuqing Li, Zhongwang Zhang, Tao Luo, Z. Xu. 30 Nov 2021.
  • Conditioning of Random Feature Matrices: Double Descent and Generalization Error. Zhijun Chen, Hayden Schaeffer. 21 Oct 2021.
  • On the Global Convergence of Gradient Descent for multi-layer ResNets in the mean-field regime. Zhiyan Ding, Shi Chen, Qin Li, S. Wright. 06 Oct 2021. [MLT, AI4CE]
  • Combining machine learning and data assimilation to forecast dynamical systems from noisy partial observations. Georg Gottwald, Sebastian Reich. 08 Aug 2021. [AI4CE]
  • Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation. Arnulf Jentzen, Adrian Riekert. 09 Jul 2021.
  • Generalization Error of GAN from the Discriminator's Perspective. Hongkang Yang, Weinan E. 08 Jul 2021. [GAN]
  • A Priori Generalization Error Analysis of Two-Layer Neural Networks for Solving High Dimensional Schrödinger Eigenvalue Problems. Jianfeng Lu, Yulong Lu. 04 May 2021.
  • Learning with invariances in random features and kernel models. Song Mei, Theodor Misiakiewicz, Andrea Montanari. 25 Feb 2021. [OOD]
  • A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Equations. Jianfeng Lu, Yulong Lu, Min Wang. 05 Jan 2021.
  • Cutting-edge 3D Medical Image Segmentation Methods in 2020: Are Happy Families All Alike? Jun Ma. 01 Jan 2021.
  • Machine Learning and Computational Mathematics. Weinan E. 23 Sep 2020. [PINN, AI4CE]
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016. [ODL]
  • The Loss Surfaces of Multilayer Networks. A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun. 30 Nov 2014. [ODL]