SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien
1 July 2014 · arXiv:1407.0202
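For context on the cited method, below is a minimal sketch of a SAGA-style update applied to a ridge-regularised least-squares problem. It is illustrative only: the problem instance, function name, and step-size heuristic are assumptions, not the paper's reference implementation, and the regulariser is folded into the smooth term so no proximal step is needed.

```python
import numpy as np

def saga_least_squares(A, b, lam=0.0, step=None, n_epochs=10, seed=0):
    """Illustrative SAGA-style solver (assumed setup, not the paper's code) for
    (1/n) * sum_i 0.5*(a_i.x - b_i)^2 + (lam/2)*||x||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Table of stored per-example gradients, plus its running average.
    grad_table = np.zeros((n, d))
    grad_avg = grad_table.mean(axis=0)
    if step is None:
        # Rough Lipschitz-based step size; a heuristic, not a tuned value.
        L = (A ** 2).sum(axis=1).max() + lam
        step = 1.0 / (3.0 * L)
    for _ in range(n_epochs * n):
        j = rng.integers(n)
        g_new = (A[j] @ x - b[j]) * A[j] + lam * x   # gradient of the j-th term
        g_old = grad_table[j].copy()
        # Unbiased variance-reduced gradient estimate: new_j - stored_j + table average.
        x -= step * (g_new - g_old + grad_avg)
        # Refresh the stored gradient for index j and the running average.
        grad_avg += (g_new - g_old) / n
        grad_table[j] = g_new
    return x
```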
Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives" (showing 50 of 353)
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure · A. Bietti, Julien Mairal · 04 Oct 2016
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration · Hongzhou Lin, Julien Mairal, Zaïd Harchaoui · 04 Oct 2016
A Primer on Coordinate Descent Algorithms · Hao-Jun Michael Shi, Shenyinying Tu, Yangyang Xu, W. Yin · 30 Sep 2016
Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method · Lihua Lei, Michael I. Jordan · 12 Sep 2016
AIDE: Fast and Communication Efficient Distributed Optimization · Sashank J. Reddi, Jakub Konecný, Peter Richtárik, Barnabás Póczós, Alex Smola · 24 Aug 2016
A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates · Tianbao Yang, Qihang Lin, Lijun Zhang · 11 Aug 2016
Stochastic Frank-Wolfe Methods for Nonconvex Optimization · Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola · 27 Jul 2016
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization · Tianlin Li, Shiqian Ma, D. Goldfarb, Wen Liu · 05 Jul 2016
Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition · Yi Tian Xu, Qihang Lin, Tianbao Yang · 04 Jul 2016
Dimension-Free Iteration Complexity of Finite Sum Optimization Problems · Yossi Arjevani, Ohad Shamir · 30 Jun 2016
Optimization Methods for Large-Scale Machine Learning · Léon Bottou, Frank E. Curtis, J. Nocedal · 15 Jun 2016
ASAGA: Asynchronous Parallel SAGA · Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien · 15 Jun 2016
Variance-Reduced Proximal Stochastic Gradient Descent for Non-convex Composite optimization · Xiyu Yu, Dacheng Tao · 02 Jun 2016
Level Up Your Strategy: Towards a Descriptive Framework for Meaningful Enterprise Gamification · Xinghao Pan · 29 May 2016
Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy · Aryan Mokhtari, Alejandro Ribeiro · 24 May 2016
Riemannian SVRG: Fast Stochastic Optimization on Riemannian Manifolds · Hongyi Zhang, Sashank J. Reddi, S. Sra · 23 May 2016
Fast Stochastic Methods for Nonsmooth Nonconvex Optimization · Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola · 23 May 2016
Accelerated Randomized Mirror Descent Algorithms For Composite Non-strongly Convex Optimization · L. Hien, Cuong V Nguyen, Huan Xu, Canyi Lu, Jiashi Feng · 23 May 2016
Stochastic Variance Reduction Methods for Saddle-Point Problems · B. Palaniappan, Francis R. Bach · 20 May 2016
Barzilai-Borwein Step Size for Stochastic Gradient Descent · Conghui Tan, Shiqian Ma, Yuhong Dai, Yuqiu Qian · 13 May 2016
On the Iteration Complexity of Oblivious First-Order Optimization Algorithms · Yossi Arjevani, Ohad Shamir · 11 May 2016
Stochastic Variance-Reduced ADMM · Shuai Zheng, James T. Kwok · 24 Apr 2016
A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization · Shun Zheng, Jialei Wang, Fen Xia, Wenyuan Xu, Tong Zhang · 13 Apr 2016
Trading-off variance and complexity in stochastic gradient descent · Vatsal Shah, Megasthenis Asteris, Anastasios Kyrillidis, Sujay Sanghavi · 22 Mar 2016
Fast Incremental Method for Nonconvex Optimization · Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola · 19 Mar 2016
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods · Zeyuan Allen-Zhu · 18 Mar 2016
Variance Reduction for Faster Non-Convex Optimization · Zeyuan Allen-Zhu, Elad Hazan · 17 Mar 2016
Optimal Black-Box Reductions Between Optimization Objectives · Zeyuan Allen-Zhu, Elad Hazan · 17 Mar 2016
On the Influence of Momentum Acceleration on Online Learning · Kun Yuan, Bicheng Ying, Ali H. Sayed · 14 Mar 2016
A Simple Practical Accelerated Method for Finite Sums · Aaron Defazio · 08 Feb 2016
Importance Sampling for Minibatches · Dominik Csiba, Peter Richtárik · 06 Feb 2016
Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters · Zeyuan Allen-Zhu, Yang Yuan, Karthik Sridharan · 05 Feb 2016
Adaptive Algorithms for Online Convex Optimization with Long-term Constraints · Rodolphe Jenatton, Jim C. Huang, Cédric Archambeau · 23 Dec 2015
RSG: Beating Subgradient Method without Smoothness and Strong Convexity · Tianbao Yang, Qihang Lin · 09 Dec 2015
Stop Wasting My Gradients: Practical SVRG · Reza Babanezhad, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub Konecný, Scott Sallinen · 05 Nov 2015
New Optimisation Methods for Machine Learning · Aaron Defazio · 09 Oct 2015
A Linearly-Convergent Stochastic L-BFGS Algorithm · Philipp Moritz, Robert Nishihara, Michael I. Jordan · 09 Aug 2015
An optimal randomized incremental gradient method · Guanghui Lan, Yi Zhou · 08 Jul 2015
On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants · Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola · 23 Jun 2015
Variance Reduced Stochastic Gradient Descent with Neighbors · Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, Brian McWilliams · 11 Jun 2015
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives · Zeyuan Allen-Zhu, Yang Yuan · 05 Jun 2015
Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields · Mark Schmidt, Reza Babanezhad, Mohamed Osama Ahmed, Aaron Defazio, Ann Clifton, Anoop Sarkar · 16 Apr 2015
A Variance Reduced Stochastic Newton Method · Aurelien Lucchi, Brian McWilliams, Thomas Hofmann · 28 Mar 2015
Stochastic Dual Coordinate Ascent with Adaptive Probabilities · Dominik Csiba, Zheng Qu, Peter Richtárik · 27 Feb 2015
SDCA without Duality · Shai Shalev-Shwartz · 22 Feb 2015
Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss · Yuchen Zhang, Lin Xiao · 01 Jan 2015
Randomized Dual Coordinate Ascent with Arbitrary Sampling · Zheng Qu, Peter Richtárik, Tong Zhang · 21 Nov 2014
A Lower Bound for the Optimization of Finite Sums · Alekh Agarwal, Léon Bottou · 02 Oct 2014
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization · Yuchen Zhang, Xiao Lin · 10 Sep 2014
A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization · Pascal Bianchi, W. Hachem, F. Iutzeler · 03 Jul 2014