Small steps and giant leaps: Minimal Newton solvers for Deep Learning

21 May 2018
João F. Henriques
Sébastien Ehrhardt
Samuel Albanie
Andrea Vedaldi
ODL

Papers citing "Small steps and giant leaps: Minimal Newton solvers for Deep Learning"

13 papers shown
Gradient Descent with Provably Tuned Learning-rate Schedules
Dravyansh Sharma
04 Dec 2025
Dual Gauss-Newton Directions for Deep Learning
Vincent Roulet
Mathieu Blondel
ODL
17 Aug 2023
FOSI: Hybrid First and Second Order Optimization
International Conference on Learning Representations (ICLR), 2023
Hadar Sivan
Moshe Gabel
Assaf Schuster
ODL
16 Feb 2023
Statistical and Computational Guarantees for Influence Diagnostics
Jillian R. Fisher
Lang Liu
Krishna Pillutla
Y. Choi
Zaïd Harchaoui
TDI
08 Dec 2022
A Stochastic Bundle Method for Interpolating Networks
Alasdair Paren
Leonard Berrada
Rudra P. K. Poudel
M. P. Kumar
29 Jan 2022
KOALA: A Kalman Optimization Algorithm with Loss Adaptivity
A. Davtyan
Sepehr Sameni
L. Cerkezi
Givi Meishvili
Adam Bielski
Paolo Favaro
ODL
07 Jul 2021
AutoSimulate: (Quickly) Learning Synthetic Data Generation
Harkirat Singh Behl
A. G. Baydin
Ran Gal
Juil Sock
Vibhav Vineet
16 Aug 2020
Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
Robin M. Schmidt
Frank Schneider
Philipp Hennig
ODL
03 Jul 2020
Enhance Curvature Information by Structured Stochastic Quasi-Newton Methods
Minghan Yang
Dong Xu
Yongfeng Li
Zaiwen Wen
Mengyun Chen
ODL
17 Jun 2020
Sketchy Empirical Natural Gradient Methods for Deep Learning
Minghan Yang
Dong Xu
Zaiwen Wen
Mengyun Chen
Pengxiang Xu
10 Jun 2020
Deep Neural Network Learning with Second-Order Optimizers -- a Practical Study with a Stochastic Quasi-Gauss-Newton Method
C. Thiele
Mauricio Araya-Polo
D. Hohl
ODL
06 Apr 2020
Training Neural Networks for and by Interpolation
International Conference on Machine Learning (ICML), 2019
Leonard Berrada
Andrew Zisserman
M. P. Kumar
3DH
13 Jun 2019
An Adaptive Remote Stochastic Gradient Method for Training Neural Networks
Yushu Chen
Hao Jing
Wenlai Zhao
Zhiqiang Liu
Haohuan Fu
Lián Qiao
Wei Xue
Guangwen Yang
ODL
04 May 2019