Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks
Jing An, Jianfeng Lu
arXiv:2304.09221, 18 April 2023
Papers citing "Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks" (5 papers)
ODE approximation for the Adam algorithm: General and overparametrized setting
Steffen Dereich, Arnulf Jentzen, Sebastian Kassing
06 Nov 2025

Convergence of Stochastic Gradient Methods for Wide Two-Layer Physics-Informed Neural Networks
Bangti Jin, Longjun Wu
29 Aug 2025

From Sublinear to Linear: Fast Convergence in Deep Networks via Locally Polyak-Lojasiewicz Regions
Agnideep Aich, Ashit Aich, Bruce Wade
29 Jul 2025

Convergence of continuous-time stochastic gradient descent with applications to deep neural networks
Gabor Lugosi, Eulalia Nualart
11 Sep 2024

Generalization Performance of Empirical Risk Minimization on Over-parameterized Deep ReLU Nets
Shao-Bo Lin, Yao Wang, Ding-Xuan Zhou
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2021
28 Nov 2021