Global linear convergence of Newton's method without strong-convexity or Lipschitz gradients
1 June 2018
Sai Praneeth Karimireddy
Sebastian U. Stich
Martin Jaggi
Papers citing "Global linear convergence of Newton's method without strong-convexity or Lipschitz gradients" (33 of 33 papers shown)
NeST-BO: Fast Local Bayesian Optimization via Newton-Step Targeting of Gradient and Hessian Information
Wei-Ting Tang
Akshay Kudva
J. Paulson
30 Mar 2026
A Split-Client Approach to Second-Order Optimization
El Mahdi Chayti
Martin Jaggi
17 Oct 2025
Solving Zero-Sum Games with Fewer Matrix-Vector Products
Ishani Karmarkar
Liam O'Carroll
Aaron Sidford
04 Sep 2025
SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning
Jingruo Sun
Zachary Frangella
Madeleine Udell
28 Jan 2025
Regularized Gauss-Newton for Optimizing Overparameterized Neural Networks
Adeyemi Damilare Adeoye
Philipp Christian Petersen
Alberto Bemporad
23 Apr 2024
Level Set Teleportation: An Optimization Perspective
Aaron Mishkin
A. Bietti
Robert Mansel Gower
05 Mar 2024
Unnatural Algorithms in Machine Learning
Christian Goodbrake
07 Dec 2023
Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy
Yingyu Lin
Yian Ma
Yu-Xiang Wang
Rachel Redberg
Zhiqi Bu
23 Oct 2023
Minimizing Quasi-Self-Concordant Functions by Gradient Regularization of Newton Method
Mathematical Programming (Math. Program.), 2023
N. Doikov
28 Aug 2023
Gradient Descent Converges Linearly for Logistic Regression on Separable Data
International Conference on Machine Learning (ICML), 2023
Kyriakos Axiotis
M. Sviridenko
26 Jun 2023
Faster Differentially Private Convex Optimization via Second-Order Methods
Neural Information Processing Systems (NeurIPS), 2023
Arun Ganesh
Mahdi Haghifam
Thomas Steinke
Abhradeep Thakurta
22 May 2023
Sketch-and-Project Meets Newton Method: Global $\mathcal{O}(k^{-2})$ Convergence with Low-Rank Updates
Slavomír Hanzely
22 May 2023
Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods
El Mahdi Chayti
N. Doikov
Martin Jaggi
23 Feb 2023
Second-order optimization with lazy Hessians
International Conference on Machine Learning (ICML), 2022
N. Doikov
El Mahdi Chayti
Martin Jaggi
01 Dec 2022
Extra-Newton: A First Approach to Noise-Adaptive Accelerated Second-Order Methods
Neural Information Processing Systems (NeurIPS), 2022
Kimon Antonakopoulos
Ali Kavis
Volkan Cevher
03 Nov 2022
FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning
International Conference on Machine Learning (ICML), 2022
Anis Elgabli
Chaouki Ben Issaid
Amrit Singh Bedi
K. Rajawat
M. Bennis
Vaneet Aggarwal
17 Jun 2022
Augmented Newton Method for Optimization: Global Linear Rate and Momentum Interpretation
M. Morshed
23 May 2022
A Stochastic Newton Algorithm for Distributed Convex Optimization
Brian Bullins
Kumar Kshitij Patel
Ohad Shamir
Nathan Srebro
Blake E. Woodworth
07 Oct 2021
Curvature-Aware Derivative-Free Optimization
Journal of Scientific Computing (J. Sci. Comput.), 2021
Bumsu Kim
HanQin Cai
Daniel McKenzie
W. Yin
27 Sep 2021
Differentially private inference via noisy optimization
Annals of Statistics (Ann. Stat.), 2021
Marco Avella-Medina
Casey Bradshaw
Po-Ling Loh
19 Mar 2021
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
Annual Conference on Computational Learning Theory (COLT), 2021
Blake E. Woodworth
Brian Bullins
Ohad Shamir
Nathan Srebro
02 Feb 2021
Asynchronous Parallel Stochastic Quasi-Newton Methods
Parallel Computing (PC), 2020
Qianqian Tong
Guannan Liang
Xingyu Cai
Chunjiang Zhu
J. Bi
02 Nov 2020
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020
SnapBoost: A Heterogeneous Boosting Machine
Thomas Parnell
Andreea Anghel
M. Lazuka
Nikolas Ioannou
Sebastian Kurella
Peshal Agarwal
N. Papandreou
Haralambos Pozidis
17 Jun 2020
Stochastic Subspace Cubic Newton Method
International Conference on Machine Learning (ICML), 2020
Filip Hanzely
N. Doikov
Peter Richtárik
Y. Nesterov
21 Feb 2020
Second-order Conditional Gradient Sliding
Alejandro Carderera
Sebastian Pokutta
20 Feb 2020
Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates
D. Kovalev
Konstantin Mishchenko
Peter Richtárik
03 Dec 2019
Fast and Furious Convergence: Stochastic Second Order Methods under Interpolation
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
S. Meng
Sharan Vaswani
I. Laradji
Mark Schmidt
Damien Scieur
11 Oct 2019
Globally Convergent Newton Methods for Ill-conditioned Generalized Self-concordant Losses
Ulysse Marteau-Ferey
Francis R. Bach
Alessandro Rudi
03 Jul 2019
Accelerating Gradient Boosting Machine
Haihao Lu
Sai Praneeth Karimireddy
Natalia Ponomareva
Vahab Mirrokni
20 Mar 2019
Deterministic Inequalities for Smooth M-estimators
Arun K. Kuchibhotla
13 Sep 2018
A Distributed Second-Order Algorithm You Can Trust
Celestine Mendler-Dünner
Aurelien Lucchi
Matilde Gargiani
An Bian
Thomas Hofmann
Martin Jaggi
20 Jun 2018
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
Virginia Smith
Simone Forte
Chenxin Ma
Martin Takáč
Michael I. Jordan
Martin Jaggi
07 Nov 2016