arXiv:2006.12297 (v2)
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime
22 June 2020
Atsushi Nitanda, Taiji Suzuki
Papers citing "Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime" (17 papers shown):
1. "Observation Noise and Initialization in Wide Neural Networks" by Sergio Calvo-Ordoñez, Jonathan Plenk, Richard Bergna, Alvaro Cartea, Jose Miguel Hernandez-Lobato, Konstantina Palla, Kamil Ciosek (03 Feb 2025)
2. "How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning" by Arthur Jacot, Seok Hoan Choi, Yuxiao Wen (08 Jul 2024) [AI4CE]
3. "Random feature approximation for general spectral methods" by Mike Nguyen, Nicole Mücke (29 Aug 2023)
4. "Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent" by Yiping Lu, Jose H. Blanchet, Lexing Ying (15 May 2022)
5. "Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions" by Maksim Velikanov, Dmitry Yarotsky (02 Feb 2022)
6. "Understanding Square Loss in Training Overparametrized Neural Network Classifiers" by Tianyang Hu, Jun Wang, Wei Cao, Zhenguo Li (07 Dec 2021) [UQCV, AAML]
7. "Learning curves for Gaussian process regression with power-law priors and targets" by Hui Jin, P. Banerjee, Guido Montúfar (23 Oct 2021)
8. "On the Double Descent of Random Features Models Trained with SGD" by Fanghui Liu, Johan A. K. Suykens, Volkan Cevher (13 Oct 2021) [MLT]
9. "A Scaling Law for Synthetic-to-Real Transfer: How Much Is Your Pre-training Effective?" by Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, S. Maeda, Kohei Hayashi (25 Aug 2021)
10. "Deep Networks Provably Classify Data on Curves" by Tingran Wang, Sam Buchanan, D. Gilboa, John N. Wright (29 Jul 2021)
11. "Neural Optimization Kernel: Towards Robust Deep Learning" by Yueming Lyu, Ivor Tsang (11 Jun 2021)
12. "Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth" by Keyulu Xu, Mozhi Zhang, Stefanie Jegelka, Kenji Kawaguchi (10 May 2021) [GNN]
13. "Universal scaling laws in the gradient descent training of neural networks" by Maksim Velikanov, Dmitry Yarotsky (02 May 2021)
14. "Spectral Analysis of the Neural Tangent Kernel for Deep Residual Networks" by Yuval Belfer, Amnon Geifman, Meirav Galun, Ronen Basri (07 Apr 2021)
15. "Particle Dual Averaging: Optimization of Mean Field Neural Networks with Global Convergence Rate Analysis" by Atsushi Nitanda, Denny Wu, Taiji Suzuki (31 Dec 2020)
16. "How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks" by Keyulu Xu, Mozhi Zhang, Jingling Li, S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka (24 Sep 2020) [MLT]
17. "Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network" by Tianyang Hu, Wei Cao, Cong Lin, Guang Cheng (06 Jul 2020)