Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information
arXiv:1708.07164 · 23 August 2017
Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney
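
The cited paper concerns Newton-type methods in which the exact Hessian is replaced by an inexact approximation, for example one built from a random subsample of the data. As a rough illustration only, and not the authors' algorithm (the paper analyzes trust-region and cubic-regularization variants under inexact Hessian information), the sketch below shows a damped Newton step with a sub-sampled Hessian; the callbacks grad_f and hess_f_sample, the batch size, and the damping constant are all illustrative assumptions.

```python
import numpy as np

def subsampled_newton_step(grad_f, hess_f_sample, x, data,
                           batch_size=64, damping=1e-3, rng=None):
    """One damped Newton-type step with an inexact, sub-sampled Hessian.

    Illustrative sketch only; not the algorithm from arXiv:1708.07164.

    grad_f(x, data)         -> gradient at x, shape (d,)
    hess_f_sample(x, batch) -> Hessian estimate built from `batch`, shape (d, d)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Inexact Hessian: estimate curvature from a random subset of the data.
    idx = rng.choice(len(data), size=min(batch_size, len(data)), replace=False)
    H = hess_f_sample(x, data[idx])
    g = grad_f(x, data)
    # Damping keeps the linear system solvable when H is indefinite
    # (the non-convex case); it stands in for a trust-region safeguard here.
    step = np.linalg.solve(H + damping * np.eye(len(x)), -g)
    return x + step
```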

Papers citing "Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information" (31 of 31 papers shown)

SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning
Jingruo Sun, Zachary Frangella, Madeleine Udell · 28 Jan 2025

Cubic regularized subspace Newton for non-convex optimization
Jim Zhao, Aurélien Lucchi, N. Doikov · 24 Jun 2024

On Newton's Method to Unlearn Neural Networks
Nhung Bui, Xinyang Lu, Rachael Hwee Ling Sim, See-Kiong Ng, Bryan Kian Hsiang Low · 20 Jun 2024

Level Set Teleportation: An Optimization Perspective
Aaron Mishkin, A. Bietti, Robert Mansel Gower · 05 Mar 2024

Second-Order Fine-Tuning without Pain for LLMs: A Hessian Informed Zeroth-Order Optimizer
Yanjun Zhao, Sizhe Dang, Haishan Ye, Guang Dai, Yi Qian, Ivor W. Tsang · 23 Feb 2024

Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods
El Mahdi Chayti, N. Doikov, Martin Jaggi · 23 Feb 2023

Faster Riemannian Newton-type Optimization by Subsampling and Cubic Regularization
Yian Deng, Tingting Mu · 22 Feb 2023

Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee
Tianyi Lin, P. Mertikopoulos, Michael I. Jordan · 23 Oct 2022

Augmented Newton Method for Optimization: Global Linear Rate and Momentum Interpretation
M. Morshed · 23 May 2022

Efficient Convex Optimization Requires Superlinear Memory
A. Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant · 29 Mar 2022

Tackling benign nonconvexity with smoothing and stochastic gradients
Harsh Vardhan, Sebastian U. Stich · 18 Feb 2022

Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization
Raghu Bollapragada, Stefan M. Wild · 24 Sep 2021

Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
Michal Derezinski, Jonathan Lacotte, Mert Pilanci, Michael W. Mahoney · 15 Jul 2021

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely · 26 Aug 2020

Precise expressions for random projections: Low-rank approximation and randomized Newton
Michal Derezinski, Feynman T. Liang, Zhenyu A. Liao, Michael W. Mahoney · 18 Jun 2020

A block coordinate descent optimizer for classification problems exploiting convexity
Ravi G. Patel, N. Trask, Mamikon A. Gulian, E. Cyr · 17 Jun 2020

Adaptive Stochastic Optimization
Frank E. Curtis, K. Scheinberg · 18 Jan 2020

Global Convergence of Policy Gradient Methods to (Almost) Locally Optimal Policies
K. Zhang, Alec Koppel, Haoqi Zhu, Tamer Basar · 19 Jun 2019

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample
A. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč · 28 Jan 2019

A note on solving nonlinear optimization problems in variable precision
Serge Gratton, P. Toint · 09 Dec 2018

Convergence of Cubic Regularization for Nonconvex Optimization under KL Property
Yi Zhou, Zhe Wang, Yingbin Liang · 22 Aug 2018

Stochastic Nested Variance Reduction for Nonconvex Optimization
Dongruo Zhou, Pan Xu, Quanquan Gu · 20 Jun 2018

Local Saddle Point Optimization: A Curvature Exploitation Approach
Leonard Adolphs, Hadi Daneshmand, Aurélien Lucchi, Thomas Hofmann · 15 May 2018

Escaping Saddles with Stochastic Gradients
Hadi Daneshmand, Jonas Köhler, Aurélien Lucchi, Thomas Hofmann · 15 Mar 2018

GPU Accelerated Sub-Sampled Newton's Method
Sudhir B. Kylasa, Farbod Roosta-Khorasani, Michael W. Mahoney, A. Grama · 26 Feb 2018

Stochastic Variance-Reduced Cubic Regularization for Nonconvex Optimization
Zhe Wang, Yi Zhou, Yingbin Liang, Guanghui Lan · 20 Feb 2018

NEON+: Accelerated Gradient Methods for Extracting Negative Curvature for Non-Convex Optimization
Yi Tian Xu, R. L. Jin, Tianbao Yang · 04 Dec 2017

On Noisy Negative Curvature Descent: Competing with Gradient Descent for Faster Non-convex Optimization
Mingrui Liu, Tianbao Yang · 25 Sep 2017

GIANT: Globally Improved Approximate Newton Method for Distributed Optimization
Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney · 11 Sep 2017

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · 30 Nov 2014

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang · 19 Mar 2014