SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien · arXiv:1407.0202 · 1 July 2014
Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives" (50 / 353 papers shown)
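As context for the list below, here is a minimal sketch of the SAGA update from the headline paper: a table of the most recent gradient for each example is kept, and each step combines a fresh gradient at a sampled example with the stored one to form an unbiased, variance-reduced estimate. The least-squares objective, step size, and data are illustrative assumptions, not taken from this page.

```python
import numpy as np

def saga_least_squares(A, b, gamma, epochs, seed=0):
    """SAGA on f(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2.

    Maintains a table of per-example gradients; each step uses
    g_new - g_stored + mean(table) as an unbiased gradient estimate.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    grads = A * (A @ w - b)[:, None]  # stored gradient for each example
    g_avg = grads.mean(axis=0)        # running average of the table
    for _ in range(epochs * n):
        j = rng.integers(n)
        g_new = A[j] * (A[j] @ w - b[j])          # fresh gradient at example j
        w -= gamma * (g_new - grads[j] + g_avg)   # SAGA variance-reduced step
        g_avg += (g_new - grads[j]) / n           # keep the average in sync
        grads[j] = g_new
    return w

# Tiny usage example: recover the planted solution of a small system.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([2.0, -1.0])
w = saga_least_squares(A, b, gamma=0.1, epochs=200)
```

The step size 0.1 is below the 1/(3L) bound suggested by the paper's analysis (here L = 2, the largest per-example smoothness constant), so the iterates converge linearly to the exact solution.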
Solving Empirical Risk Minimization in the Current Matrix Multiplication Time
Y. Lee, Zhao Song, Qiuyi Zhang · 11 May 2019

Stochastic Iterative Hard Thresholding for Graph-structured Sparsity Optimization
Baojian Zhou, F. Chen, Yiming Ying · 09 May 2019

Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang · 01 May 2019

Reducing Noise in GAN Training with Variance Reduced Extragradient
Tatjana Chavdarova, Gauthier Gidel, François Fleuret, Simon Lacoste-Julien · 18 Apr 2019

On the Adaptivity of Stochastic Gradient-Based Optimization
Lihua Lei, Michael I. Jordan · 09 Apr 2019 [ODL]

Cocoercivity, Smoothness and Bias in Variance-Reduced Stochastic Gradient Methods
Martin Morin, Pontus Giselsson · 21 Mar 2019

An Empirical Study of Large-Batch Stochastic Gradient Descent with Structured Covariance Noise
Yeming Wen, Kevin Luk, Maxime Gazeau, Guodong Zhang, Harris Chan, Jimmy Ba · 21 Feb 2019 [ODL]

Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization
Feihu Huang, Bin Gu, Zhouyuan Huo, Songcan Chen, Heng-Chiao Huang · 16 Feb 2019

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh · 15 Feb 2019

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample
A. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč · 28 Jan 2019

Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums
Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié · 28 Jan 2019 [FedML]

99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it
Konstantin Mishchenko, Filip Hanzely, Peter Richtárik · 27 Jan 2019

Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise
A. Kulunchakov, Julien Mairal · 25 Jan 2019

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
D. Kovalev, Samuel Horváth, Peter Richtárik · 24 Jan 2019

SAGA with Arbitrary Sampling
Xun Qian, Zheng Qu, Peter Richtárik · 24 Jan 2019

SGD Converges to Global Minimum in Deep Learning via Star-convex Path
Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh · 02 Jan 2019

On the Ineffectiveness of Variance Reduced Optimization for Deep Learning
Aaron Defazio, Léon Bottou · 11 Dec 2018 [UQCV, DRL]

Asynchronous Stochastic Composition Optimization with Variance Reduction
Shuheng Shen, Linli Xu, Jingchang Liu, Junliang Guo, Qing Ling · 15 Nov 2018

R-SPIDER: A Fast Riemannian Stochastic Optimization Algorithm with Curvature Independent Rate
J.N. Zhang, Hongyi Zhang, S. Sra · 10 Nov 2018

New Convergence Aspects of Stochastic Gradient Algorithms
Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk · 10 Nov 2018

Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy
Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč · 26 Oct 2018

SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms
Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh · 25 Oct 2018 [ODL]

Multi-Agent Fully Decentralized Value Function Learning with Linear Convergence Rates
Lucas Cassano, Kun Yuan, Ali H. Sayed · 17 Oct 2018

Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron
Sharan Vaswani, Francis R. Bach, Mark Schmidt · 16 Oct 2018

Quasi-hyperbolic momentum and Adam for deep learning
Jerry Ma, Denis Yarats · 16 Oct 2018 [ODL]

Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD
Marten van Dijk, Lam M. Nguyen, Phuong Ha Nguyen, Dzung Phan · 09 Oct 2018

POLO: a POLicy-based Optimization library
Arda Aytekin, Martin Biel, M. Johansson · 08 Oct 2018

Accelerating Stochastic Gradient Descent Using Antithetic Sampling
Jingchang Liu, Linli Xu · 07 Oct 2018

Continuous-time Models for Stochastic Optimization Algorithms
Antonio Orvieto, Aurelien Lucchi · 05 Oct 2018

A fast quasi-Newton-type method for large-scale stochastic optimisation
A. Wills, Carl Jidling, Thomas B. Schön · 29 Sep 2018 [ODL]

Optimal Matrix Momentum Stochastic Approximation and Applications to Q-learning
Adithya M. Devraj, Ana Bušić, Sean P. Meyn · 17 Sep 2018

SEGA: Variance Reduction via Gradient Sketching
Filip Hanzely, Konstantin Mishchenko, Peter Richtárik · 09 Sep 2018

On the Acceleration of L-BFGS with Second-Order Information and Stochastic Batches
Jie Liu, Yu Rong, Martin Takáč, Junzhou Huang · 14 Jul 2018 [ODL]

Dual optimization for convex constrained objectives without the gradient-Lipschitz assumption
Martin Bompaire, Emmanuel Bacry, Stéphane Gaïffas · 10 Jul 2018

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang · 04 Jul 2018

Quasi-Monte Carlo Variational Inference
Alexander K. Buchholz, F. Wenzel, Stephan Mandt · 04 Jul 2018 [BDL]

A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
Kaiwen Zhou, Fanhua Shang, James Cheng · 28 Jun 2018

A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm
Konstantin Mishchenko, F. Iutzeler, J. Malick · 25 Jun 2018

Stochastic Nested Variance Reduction for Nonconvex Optimization
Dongruo Zhou, Pan Xu, Quanquan Gu · 20 Jun 2018

Laplacian Smoothing Gradient Descent
Stanley Osher, Bao Wang, Penghang Yin, Xiyang Luo, Farzin Barekat, Minh Pham, A. Lin · 17 Jun 2018 [ODL]

Stochastic Variance-Reduced Policy Gradient
Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, Marcello Restelli · 14 Jun 2018

Towards Riemannian Accelerated Gradient Methods
Hongyi Zhang, S. Sra · 07 Jun 2018

Doubly Robust Bayesian Inference for Non-Stationary Streaming Data with β-Divergences
Jeremias Knoblauch, Jack Jewson, Theodoros Damoulas · 06 Jun 2018

Multi-Agent Reinforcement Learning via Double Averaging Primal-Dual Optimization
Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, Mingyi Hong · 03 Jun 2018

Nonlinear Acceleration of CNNs
Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis R. Bach · 01 Jun 2018

Stochastic algorithms with descent guarantees for ICA
Pierre Ablin, Alexandre Gramfort, J. Cardoso, Francis R. Bach · 25 May 2018 [CML]

Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
Zebang Shen, Aryan Mokhtari, Tengfei Zhou, P. Zhao, Hui Qian · 25 May 2018

LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Tianyi Chen, G. Giannakis, Tao Sun, W. Yin · 25 May 2018

D²: Decentralized Training over Decentralized Data
Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu · 19 Mar 2018

Constrained Deep Learning using Conditional Gradient and Applications in Computer Vision
Sathya Ravi, Tuan Dinh, Vishnu Suresh Lokhande, Vikas Singh · 17 Mar 2018 [AI4CE]