Fast Convergence of Natural Gradient Descent for Overparameterized Neural Networks

27 May 2019
Guodong Zhang, James Martens, Roger C. Grosse
ODL
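For context, natural gradient descent preconditions the loss gradient with the inverse Fisher information matrix. A standard form of the update (notation chosen here for illustration, not taken from this page) is

$$\theta_{t+1} = \theta_t - \eta \, F(\theta_t)^{-1} \nabla_\theta \mathcal{L}(\theta_t),$$

where $\eta$ is the learning rate and $F(\theta) = \mathbb{E}\big[\nabla_\theta \log p(y \mid x, \theta)\, \nabla_\theta \log p(y \mid x, \theta)^\top\big]$ is the Fisher matrix. Per its title, the paper analyzes how fast this iteration converges when the network is sufficiently overparameterized.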

Papers citing "Fast Convergence of Natural Gradient Descent for Overparameterized Neural Networks"

28 of 28 papers shown
An Improved Empirical Fisher Approximation for Natural Gradient Descent
Xiaodong Wu, Wenyi Yu, Chao Zhang, Philip Woodland
10 Jun 2024

The Challenges of the Nonlinear Regime for Physics-Informed Neural Networks
Andrea Bonfanti, Giuseppe Bruno, Cristina Cipriani
06 Feb 2024

How to Protect Copyright Data in Optimization of Large Language Models?
T. Chu, Zhao-quan Song, Chiwun Yang
23 Aug 2023

Efficient SGD Neural Network Training via Sublinear Activated Neuron Identification
Lianke Qin, Zhao-quan Song, Yuanyuan Yang
13 Jul 2023

Gauss-Newton Temporal Difference Learning with Nonlinear Function Approximation
Zhifa Ke, Junyu Zhang, Zaiwen Wen
25 Feb 2023

Gradient Flows for Sampling: Mean-Field Models, Gaussian Approximations and Affine Invariance
Yifan Chen, Daniel Zhengyu Huang, Jiaoyang Huang, Sebastian Reich, Andrew M. Stuart
21 Feb 2023

The Geometry of Neural Nets' Parameter Spaces Under Reparametrization
Agustinus Kristiadi, Felix Dangel, Philipp Hennig
14 Feb 2023

Bypass Exponential Time Preprocessing: Fast Neural Network Training via Weight-Data Correlation Preprocessing
Josh Alman, Jiehao Liang, Zhao-quan Song, Ruizhe Zhang, Danyang Zhuo
25 Nov 2022

Component-Wise Natural Gradient Descent -- An Efficient Neural Network Optimization
Tran van Sang, Mhd Irvan, R. Yamaguchi, Toshiyuki Nakata
11 Oct 2022

Scalable K-FAC Training for Deep Neural Networks with Distributed Preconditioning
Lin Zhang, S. Shi, Wei Wang, Bo-wen Li
30 Jun 2022

Amortized Proximal Optimization
Juhan Bae, Paul Vicol, Jeff Z. HaoChen, Roger C. Grosse
ODL
28 Feb 2022

A Geometric Understanding of Natural Gradient
Qinxun Bai, S. Rosenberg, Wei Xu
13 Feb 2022

Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization
Frederik Benzing
ODL
28 Jan 2022

Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
Zhao-quan Song, Licheng Zhang, Ruizhe Zhang
14 Dec 2021

SCORE: Approximating Curvature Information under Self-Concordant Regularization
Adeyemi Damilare Adeoye, Alberto Bemporad
14 Dec 2021

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
MLT
13 Dec 2021

Does Preprocessing Help Training Over-parameterized Neural Networks?
Zhao-quan Song, Shuo Yang, Ruizhe Zhang
09 Oct 2021

Improved architectures and training algorithms for deep operator networks
Sifan Wang, Hanwen Wang, P. Perdikaris
AI4CE
04 Oct 2021

Credit Assignment in Neural Networks through Deep Feedback Control
Alexander Meulemans, Matilde Tristany Farinha, Javier García Ordóñez, Pau Vilimelis Aceituno, João Sacramento, Benjamin Grewe
15 Jun 2021

Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
A. Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gurbuzbalaban, Umut Şimşekli, Lingjiong Zhu
09 Jun 2021

TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion
Saeed Soori, Bugra Can, Baourun Mu, Mert Gurbuzbalaban, M. Dehnavi
07 Jun 2021

Whitening and second order optimization both make information in the dataset unusable during training, and can reduce or prevent generalization
Neha S. Wadia, Daniel Duckworth, S. Schoenholz, Ethan Dyer, Jascha Narain Sohl-Dickstein
17 Aug 2020

When Does Preconditioning Help or Hurt Generalization?
S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
18 Jun 2020

Non-convergence of stochastic gradient descent in the training of deep neural networks
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
12 Jun 2020

Any Target Function Exists in a Neighborhood of Any Sufficiently Wide Random Network: A Geometrical Perspective
S. Amari
20 Jan 2020

Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
Tianle Cai, Ruiqi Gao, Jikai Hou, Siyu Chen, Dong Wang, Di He, Zhihua Zhang, Liwei Wang
ODL
28 May 2019

Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
Atsushi Nitanda, Geoffrey Chinot, Taiji Suzuki
MLT
23 May 2019

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015