arXiv:1905.13096

Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1
International Conference on Machine Learning, Optimization, and Data Science (MOD), 2019
30 May 2019
Majid Jahani, M. Nazari, S. Rusakov, A. Berahas, Martin Takáč

Papers citing "Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1" (8 of 8 papers shown)

Scaling up Stochastic Gradient Descent for Non-convex Optimisation
Machine Learning (ML), 2022
S. Mohamad, H. Alamri, A. Bouchachia
06 Oct 2022

Stochastic Gradient Methods with Preconditioned Updates
Journal of Optimization Theory and Applications (JOTA), 2022
Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, R. Tappenden, Martin Takáč
01 Jun 2022

On the efficiency of Stochastic Quasi-Newton Methods for Deep Learning
M. Yousefi, Angeles Martinez
18 May 2022

Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information
Majid Jahani, S. Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takáč
11 Sep 2021

SONIA: A Symmetric Blockwise Truncated Optimization Algorithm
Majid Jahani, M. Nazari, R. Tappenden, A. Berahas, Martin Takáč
06 Jun 2020

rTop-k: A Statistical Estimation Approach to Distributed SGD
L. P. Barnes, Huseyin A. Inan, Berivan Isik, Ayfer Özgür
21 May 2020

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample
A. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč
28 Jan 2019

Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy
Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč
26 Oct 2018