
Efficient Subsampled Gauss-Newton and Natural Gradient Methods for Training Neural Networks
Yi Ren, Shiqian Ma
arXiv:1906.02353, 5 June 2019
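The paper's topic is second-order-style training in which the curvature matrix is built from a subsample of the data at each step. As a rough illustration only (this is a generic, minimal sketch of a subsampled Gauss-Newton step on a toy least-squares problem, not the authors' algorithm; the model, batch size, and damping constant below are all made up for the example):

```python
# Minimal sketch of a subsampled Gauss-Newton iteration for nonlinear least
# squares. Illustrative only -- NOT the method proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression: fit w in the model f(x; w) = tanh(x @ w) to targets y.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.tanh(X @ w_true)

def residuals_and_jacobian(w, idx):
    """Residuals r_i = tanh(x_i @ w) - y_i and their Jacobian on a subsample."""
    z = X[idx] @ w
    r = np.tanh(z) - y[idx]
    # d/dw tanh(x @ w) = (1 - tanh(z)^2) * x
    J = (1.0 - np.tanh(z) ** 2)[:, None] * X[idx]
    return r, J

w = np.zeros(d)
batch, damping = 40, 1e-3   # subsample size and Levenberg-style damping (arbitrary)
for _ in range(50):
    idx = rng.choice(n, size=batch, replace=False)   # fresh subsample each step
    r, J = residuals_and_jacobian(w, idx)
    # Subsampled Gauss-Newton system: (J^T J / b + damping * I) p = -J^T r / b
    G = J.T @ J / batch + damping * np.eye(d)
    p = np.linalg.solve(G, -J.T @ r / batch)
    w = w + p

print(np.linalg.norm(w - w_true))   # distance to the generating weights
```

For a probabilistic model the analogous curvature matrix is the (subsampled) Fisher information, which turns the same update into a natural gradient step.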
Papers citing "Efficient Subsampled Gauss-Newton and Natural Gradient Methods for Training Neural Networks" (26 papers shown)
1. Fast Convergence Rates for Subsampled Natural Gradient Algorithms on Quadratic Model Problems. Gil Goldshlager, Jiang Hu, Lin Lin. 28 Aug 2025.
2. Position: Curvature Matrices Should Be Democratized via Linear Operators. Felix Dangel, Runa Eschenhagen, Weronika Ormaniec, Andres Fernandez, Lukas Tatzel, Agustinus Kristiadi. 31 Jan 2025.
3. A Riemannian Optimization Perspective of the Gauss-Newton Method for Feedforward Neural Networks. Semih Cayci. 18 Dec 2024.
4. Theoretical characterisation of the Gauss-Newton conditioning in Neural Networks. Jim Zhao, Sidak Pal Singh, Aurelien Lucchi. NeurIPS 2024. 04 Nov 2024.
5. Incremental Gauss-Newton Descent for Machine Learning. Mikalai Korbit, Mario Zanon. 10 Aug 2024.
6. An Improved Empirical Fisher Approximation for Natural Gradient Descent. Xiaodong Wu, Wenyi Yu, Chao Zhang, Philip Woodland. 10 Jun 2024.
7. Ground state phases of the two-dimension electron gas with a unified variational approach. Conor Smith, Yixiao Chen, Ryan Levy, Yubo Yang, Miguel A. Morales, Shiwei Zhang. 29 May 2024.
8. Exact Gauss-Newton Optimization for Training Deep Neural Networks. Mikalai Korbit, Adeyemi Damilare Adeoye, Alberto Bemporad, Mario Zanon. 23 May 2024.
9. Thermodynamic Natural Gradient Descent. Kaelan Donatella, Samuel Duffield, Maxwell Aifer, Denis Melanson, Gavin Crooks, Patrick J. Coles. 22 May 2024.
10. Regularized Gauss-Newton for Optimizing Overparameterized Neural Networks. Adeyemi Damilare Adeoye, Philipp Christian Petersen, Alberto Bemporad. 23 Apr 2024.
11. Inverse-Free Fast Natural Gradient Descent Method for Deep Learning. Xinwei Ou, Ce Zhu, Xiaolin Huang, Yipeng Liu. 06 Mar 2024.
12. A Kaczmarz-inspired approach to accelerate the optimization of neural network wavefunctions. Gil Goldshlager, Nilin Abrahamsen, Lin Lin. 18 Jan 2024.
13. Optimising Distributions with Natural Gradient Surrogates. Jonathan So, Richard Turner. AISTATS 2023. 18 Oct 2023.
14. Dual Gauss-Newton Directions for Deep Learning. Vincent Roulet, Mathieu Blondel. 17 Aug 2023.
15. MKOR: Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 Updates. Mohammad Mozaffari, Sikan Li, Zhao Zhang, M. Dehnavi. NeurIPS 2023. 02 Jun 2023.
16. ASDL: A Unified Interface for Gradient Preconditioning in PyTorch. Kazuki Osawa, Satoki Ishikawa, Rio Yokota, Shigang Li, Torsten Hoefler. 08 May 2023.
17. Achieving High Accuracy with PINNs via Energy Natural Gradients. Johannes Müller, Marius Zeinhofer. ICML 2023. 25 Feb 2023.
18. Rethinking Gauss-Newton for learning over-parameterized models. Michael Arbel, Romain Menegaux, Pierre Wolinski. NeurIPS 2023. 06 Feb 2023.
19. Improving Levenberg-Marquardt Algorithm for Neural Networks. Omead Brandon Pooladzandi, Yiming Zhou. 17 Dec 2022.
20. Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization. Frederik Benzing. ICML 2022. 28 Jan 2022.
21. Regularized Newton Method with Global $O(1/k^2)$ Convergence. Konstantin Mishchenko. 03 Dec 2021.
22. TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion. Saeed Soori, Bugra Can, Baourun Mu, Mert Gurbuzbalaban, M. Dehnavi. 07 Jun 2021.
23. Research of Damped Newton Stochastic Gradient Descent Method for Neural Network Training. Jingcheng Zhou, Wei Wei, Zhiming Zheng. 31 Mar 2021.
24. Kronecker-factored Quasi-Newton Methods for Deep Learning. Yi Ren, Achraf Bahamou, Shiqian Ma. 12 Feb 2021.
25. Sketchy Empirical Natural Gradient Methods for Deep Learning. Minghan Yang, Dong Xu, Zaiwen Wen, Mengyun Chen, Pengxiang Xu. 10 Jun 2020.
26. Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems. Tianle Cai, Ruiqi Gao, Jikai Hou, Siyu Chen, Dong Wang, Di He, Zhihua Zhang, Liwei Wang. 28 May 2019.