Semi-Stochastic Gradient Descent Methods [ODL]
Jakub Konecný, Peter Richtárik
arXiv:1312.1666, 5 December 2013
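For context on the method the citing papers below build on: semi-stochastic gradient descent (S2GD) alternates an outer loop that computes one full gradient with an inner loop of cheap stochastic steps whose noise is corrected by that full gradient. The following is a minimal sketch, assuming a least-squares objective; the function name, parameters, and the fixed inner-loop length are illustrative (the paper draws the inner-loop length at random), not code from the paper.

```python
import numpy as np

def s2gd(A, b, x0, stepsize=0.05, epochs=30, inner_steps=None, seed=None):
    """S2GD/SVRG-style sketch for least squares: min_x (1/2n) ||A x - b||^2."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    m = inner_steps or n              # inner-loop length (fixed here for simplicity)
    x = x0.astype(float).copy()
    for _ in range(epochs):
        y = x.copy()                              # snapshot point
        mu = A.T @ (A @ y - b) / n                # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)                   # sample one data point
            gi_x = A[i] * (A[i] @ x - b[i])       # stochastic gradient at x
            gi_y = A[i] * (A[i] @ y - b[i])       # stochastic gradient at snapshot
            x -= stepsize * (gi_x - gi_y + mu)    # variance-reduced step
    return x
```

Because the correction term `gi_x - gi_y + mu` is an unbiased gradient estimate whose variance vanishes as `x` and the snapshot `y` approach the optimum, a constant stepsize gives linear convergence on strongly convex problems, unlike plain SGD.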
Papers citing "Semi-Stochastic Gradient Descent Methods" (50 of 50 shown):
- Second-order Information Promotes Mini-Batch Robustness in Variance-Reduced Gradients. Sachin Garg, A. Berahas, Michal Dereziñski. 23 Apr 2024.
- Stochastic Gradient Methods with Preconditioned Updates [ODL]. Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, R. Tappenden, Martin Takáč. 01 Jun 2022.
- Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization. Kaiwen Zhou, Anthony Man-Cho So, James Cheng. 30 Sep 2021.
- Physics-informed Dyna-Style Model-Based Deep Reinforcement Learning for Dynamic Control [AI4CE]. Xin-Yang Liu, Jian-Xun Wang. 31 Jul 2021.
- Stochastic Polyak Stepsize with a Moving Target. Robert Mansel Gower, Aaron Defazio, Michael G. Rabbat. 22 Jun 2021.
- SVRG Meets AdaGrad: Painless Variance Reduction. Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark Schmidt, Simon Lacoste-Julien. 18 Feb 2021.
- Variance-Reduced Methods for Machine Learning. Robert Mansel Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik. 02 Oct 2020.
- Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
- Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods. Martin Morin, Pontus Giselsson. 13 Feb 2020.
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization. Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan. 13 Feb 2020.
- Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. Filip Hanzely, D. Kovalev, Peter Richtárik. 11 Feb 2020.
- A Unifying Framework for Variance Reduction Algorithms for Finding Zeroes of Monotone Operators. Xun Zhang, W. Haskell, Z. Ye. 22 Jun 2019.
- Why gradient clipping accelerates training: A theoretical justification for adaptivity. Jiaming Zhang, Tianxing He, S. Sra, Ali Jadbabaie. 28 May 2019.
- Cocoercivity, Smoothness and Bias in Variance-Reduced Stochastic Gradient Methods. Martin Morin, Pontus Giselsson. 21 Mar 2019.
- Sparse Regression and Adaptive Feature Generation for the Discovery of Dynamical Systems. C. S. Kulkarni. 07 Feb 2019.
- SAGA with Arbitrary Sampling. Xun Qian, Zheng Qu, Peter Richtárik. 24 Jan 2019.
- R-SPIDER: A Fast Riemannian Stochastic Optimization Algorithm with Curvature Independent Rate. Jiaming Zhang, Hongyi Zhang, S. Sra. 10 Nov 2018.
- Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy. Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč. 26 Oct 2018.
- A fast quasi-Newton-type method for large-scale stochastic optimisation [ODL]. A. Wills, Carl Jidling, Thomas B. Schon. 29 Sep 2018.
- An Improvement of Data Classification Using Random Multimodel Deep Learning (RMDL). Mojtaba Heidarysafa, Kamran Kowsari, Donald E. Brown, K. Meimandi, Laura E. Barnes. 23 Aug 2018.
- Improved asynchronous parallel optimization analysis for stochastic incremental methods. Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien. 11 Jan 2018.
- Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods. Nicolas Loizou, Peter Richtárik. 27 Dec 2017.
- Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method. Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro. 22 May 2017.
- SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient [ODL]. Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč. 01 Mar 2017.
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate. Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro. 01 Nov 2016.
- Federated Optimization: Distributed Machine Learning for On-Device Intelligence [FedML]. Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik. 08 Oct 2016.
- AIDE: Fast and Communication Efficient Distributed Optimization. Sashank J. Reddi, Jakub Konecný, Peter Richtárik, Barnabás Póczós, Alex Smola. 24 Aug 2016.
- ASAGA: Asynchronous Parallel SAGA [AI4TS]. Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien. 15 Jun 2016.
- Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy [ODL]. Aryan Mokhtari, Alejandro Ribeiro. 24 May 2016.
- Riemannian SVRG: Fast Stochastic Optimization on Riemannian Manifolds. Hongyi Zhang, Sashank J. Reddi, S. Sra. 23 May 2016.
- Barzilai-Borwein Step Size for Stochastic Gradient Descent. Conghui Tan, Shiqian Ma, Yuhong Dai, Yuqiu Qian. 13 May 2016.
- Trading-off variance and complexity in stochastic gradient descent. Vatsal Shah, Megasthenis Asteris, Anastasios Kyrillidis, Sujay Sanghavi. 22 Mar 2016.
- A Simple Practical Accelerated Method for Finite Sums. Aaron Defazio. 08 Feb 2016.
- Importance Sampling for Minibatches. Dominik Csiba, Peter Richtárik. 06 Feb 2016.
- Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters. Zeyuan Allen-Zhu, Yang Yuan, Karthik Sridharan. 05 Feb 2016.
- Kalman-based Stochastic Gradient Method with Stop Condition and Insensitivity to Conditioning. V. Patel. 03 Dec 2015.
- Federated Optimization: Distributed Optimization Beyond the Datacenter [FedML]. Jakub Konecný, H. B. McMahan, Daniel Ramage. 11 Nov 2015.
- Stop Wasting My Gradients: Practical SVRG. Reza Babanezhad, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub Konecný, Scott Sallinen. 05 Nov 2015.
- New Optimisation Methods for Machine Learning. Aaron Defazio. 09 Oct 2015.
- Scalable Computation of Regularized Precision Matrices via Stochastic Optimization. Yves F. Atchadé, Rahul Mazumder, Jie-bin Chen. 01 Sep 2015.
- Distributed Stochastic Variance Reduced Gradient Methods and A Lower Bound for Communication Complexity [FedML]. Jason D. Lee, Qihang Lin, Tengyu Ma, Tianbao Yang. 27 Jul 2015.
- On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants. Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola. 23 Jun 2015.
- Variance Reduced Stochastic Gradient Descent with Neighbors [ODL]. Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, Brian McWilliams. 11 Jun 2015.
- Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting [ODL]. Jakub Konecný, Jie Liu, Peter Richtárik, Martin Takáč. 16 Apr 2015.
- A Variance Reduced Stochastic Newton Method [ODL]. Aurelien Lucchi, Brian McWilliams, Thomas Hofmann. 28 Mar 2015.
- Stochastic Dual Coordinate Ascent with Adaptive Probabilities [ODL]. Dominik Csiba, Zheng Qu, Peter Richtárik. 27 Feb 2015.
- SDCA without Duality. Shai Shalev-Shwartz. 22 Feb 2015.
- Global Convergence of Online Limited Memory BFGS. Aryan Mokhtari, Alejandro Ribeiro. 06 Sep 2014.
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction [ODL]. Lin Xiao, Tong Zhang. 19 Mar 2014.
- Minimizing Finite Sums with the Stochastic Average Gradient. Mark Schmidt, Nicolas Le Roux, Francis R. Bach. 10 Sep 2013.