Small steps and giant leaps: Minimal Newton solvers for Deep Learning
21 May 2018 · arXiv 1805.08095
João F. Henriques, Sébastien Ehrhardt, Samuel Albanie, Andrea Vedaldi
ODL
Papers citing "Small steps and giant leaps: Minimal Newton solvers for Deep Learning"
8 of 8 papers shown.
Statistical and Computational Guarantees for Influence Diagnostics
Jillian R. Fisher, Lang Liu, Krishna Pillutla, Y. Choi, Zaïd Harchaoui
TDI · 54 · 0 · 0 · 08 Dec 2022

A Stochastic Bundle Method for Interpolating Networks
Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. P. Kumar
64 · 4 · 0 · 29 Jan 2022

KOALA: A Kalman Optimization Algorithm with Loss Adaptivity
A. Davtyan, Sepehr Sameni, L. Cerkezi, Givi Meishvili, Adam Bielski, Paolo Favaro
ODL · 158 · 3 · 0 · 07 Jul 2021

Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
Robin M. Schmidt, Frank Schneider, Philipp Hennig
ODL · 202 · 168 · 0 · 03 Jul 2020

Sketchy Empirical Natural Gradient Methods for Deep Learning
Minghan Yang, Dong Xu, Zaiwen Wen, Mengyun Chen, Pengxiang Xu
27 · 13 · 0 · 10 Jun 2020

Deep Neural Network Learning with Second-Order Optimizers -- a Practical Study with a Stochastic Quasi-Gauss-Newton Method
C. Thiele, Mauricio Araya-Polo, D. Hohl
ODL · 21 · 2 · 0 · 06 Apr 2020

Training Neural Networks for and by Interpolation
Leonard Berrada, Andrew Zisserman, M. P. Kumar
3DH · 74 · 63 · 0 · 13 Jun 2019

An Adaptive Remote Stochastic Gradient Method for Training Neural Networks
Yushu Chen, Hao Jing, Wenlai Zhao, Zhiqiang Liu, Haohuan Fu, Lián Qiao, Wei Xue, Guangwen Yang
ODL · 49 · 2 · 0 · 04 May 2019