SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

Neural Information Processing Systems (NeurIPS), 2014
1 July 2014
Aaron Defazio
Francis R. Bach
Simon Lacoste-Julien
    ODL
arXiv: 1407.0202
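
For context on the method these papers build on, the following is a minimal sketch of the SAGA update applied to an l2-regularized least-squares problem. The problem choice, the function name, and the 1/(3L) step size are illustrative assumptions, not details taken from the paper or this page.

import numpy as np

def saga_ridge_least_squares(A, b, lam=1e-3, n_epochs=20, seed=0):
    """Minimal SAGA sketch (assumed example, not code from the paper or this page).

    Minimizes (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2 + 0.5 * lam * ||w||^2.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    # SAGA keeps a table of the most recently evaluated per-example gradients
    # together with their running average.
    grad_table = np.zeros((n, d))
    grad_avg = np.zeros(d)
    # Step size 1/(3L), with L a crude per-example smoothness estimate
    # (max squared row norm plus lam); an illustrative choice only.
    L = np.max(np.sum(A * A, axis=1)) + lam
    step = 1.0 / (3.0 * L)
    for _ in range(n_epochs * n):
        j = rng.integers(n)
        # Fresh gradient of component j at the current iterate.
        g_new = (A[j] @ w - b[j]) * A[j] + lam * w
        # Unbiased variance-reduced gradient estimate (the SAGA update).
        v = g_new - grad_table[j] + grad_avg
        w -= step * v
        # Refresh the stored gradient for component j and its average.
        grad_avg += (g_new - grad_table[j]) / n
        grad_table[j] = g_new
    return w

For the non-smooth composite objectives named in the title, the plain gradient step above would be replaced by a proximal step on the regularizer.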

Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives"

50 / 878 papers shown
Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning
Fanhua Shang, Yuanyuan Liu, James Cheng, Jiacheng Zhuo
ODL · 23 Mar 2017

Guaranteed Sufficient Decrease for Variance Reduced Stochastic Gradient Descent
Fanhua Shang, Yuanyuan Liu, James Cheng, K. K. Ng, Yuichi Yoshida
20 Mar 2017

Riemannian stochastic quasi-Newton algorithm with variance reduction and its convergence analysis
Hiroyuki Kasai, Hiroyuki Sato, Bamdev Mishra
15 Mar 2017

Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms
Jialei Wang, Lin Xiao
07 Mar 2017

Learn-and-Adapt Stochastic Dual Gradients for Network Resource Allocation
Tianyi Chen, Qing Ling, G. Giannakis
05 Mar 2017

Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization
Tomoya Murata, Taiji Suzuki
OffRL · 01 Mar 2017

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč
ODL · 01 Mar 2017

Optimal algorithms for smooth and strongly convex distributed optimization in networks
International Conference on Machine Learning (ICML), 2017
Kevin Scaman, Francis R. Bach, Sébastien Bubeck, Y. Lee, Laurent Massoulié
28 Feb 2017

Stochastic Variance Reduction Methods for Policy Evaluation
International Conference on Machine Learning (ICML), 2017
S. Du, Jianshu Chen, Lihong Li, Lin Xiao, Dengyong Zhou
OffRL · 25 Feb 2017

Stochastic Composite Least-Squares Regression with convergence rate O(1/n)
Annual Conference Computational Learning Theory (COLT), 2017
Nicolas Flammarion, Francis R. Bach
21 Feb 2017
Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox
Annual Conference Computational Learning Theory (COLT), 2017
Jialei Wang, Weiran Wang, Nathan Srebro
21 Feb 2017

SAGA and Restricted Strong Convexity
Chao Qu, Yan Li, Huan Xu
19 Feb 2017

Riemannian stochastic variance reduced gradient algorithm with retraction and vector transport
SIAM Journal on Optimization (SIAM J. Optim.), 2016
Hiroyuki Sato, Hiroyuki Kasai, Bamdev Mishra
18 Feb 2017

Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter
International Conference on Machine Learning (ICML), 2017
Zeyuan Allen-Zhu
02 Feb 2017

IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
SIAM Journal on Optimization (SIAM J. Optim.), 2017
Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro
02 Feb 2017

Linear convergence of SDCA in statistical estimation
Chao Qu, Huan Xu
26 Jan 2017

An Asynchronous Parallel Approach to Sparse Recovery
Information Theory and Applications Workshop (ITA), 2017
Deanna Needell, T. Woolf
12 Jan 2017

A Universal Variance Reduction-Based Catalyst for Nonconvex Low-Rank Matrix Recovery
Lingxiao Wang, Xiao Zhang, Quanquan Gu
09 Jan 2017

Stochastic Variance-reduced Gradient Descent for Low-rank Matrix Recovery from Linear Measurements
Xiao Zhang, Lingxiao Wang, Quanquan Gu
02 Jan 2017

Asymptotic Optimality in Stochastic Optimization
Annals of Statistics (Ann. Stat.), 2016
John C. Duchi, Feng Ruan
16 Dec 2016
Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme under Weak Strong Convexity Assumption
Jie Liu, Martin Takáč
ODL · 16 Dec 2016

Coupling Adaptive Batch Sizes with Learning Rates
Conference on Uncertainty in Artificial Intelligence (UAI), 2016
Lukas Balles, Javier Romero, Philipp Hennig
ODL · 15 Dec 2016

Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
Alec Koppel, Garrett A. Warnell, Ethan Stump, Alejandro Ribeiro
13 Dec 2016

Decentralized Frank-Wolfe Algorithm for Convex and Non-convex Problems
Hoi-To Wai, Jean Lafond, Anna Scaglione, Eric Moulines
05 Dec 2016

Subsampled online matrix factorization with convergence guarantees
A. Mensch, Julien Mairal, Gaël Varoquaux, Bertrand Thirion
30 Nov 2016

Scalable Adaptive Stochastic Optimization Using Random Projections
Gabriel Krummenacher, Brian McWilliams, Yannic Kilcher, J. M. Buhmann, N. Meinshausen
ODL · 21 Nov 2016

Accelerated Variance Reduced Block Coordinate Descent
Zebang Shen, Hui Qian, Chao Zhang, Tengfei Zhou
13 Nov 2016

Greedy Step Averaging: A parameter-free stochastic optimization method
Xiatian Zhang, Fan Yao, Yongjun Tian
11 Nov 2016

Linear Convergence of SVRG in Statistical Estimation
Chao Qu, Yan Li, Huan Xu
07 Nov 2016

Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro
01 Nov 2016
Big Batch SGD: Automated Inference using Adaptive Batch Sizes
Soham De, A. Yadav, David Jacobs, Tom Goldstein
ODL · 18 Oct 2016

Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server
Arda Aytekin, Hamid Reza Feyzmahdavian, M. Johansson
18 Oct 2016

Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification
Prateek Jain, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford
MoMe · 12 Oct 2016

Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach
Mathematics of Operations Research (MOR), 2016
John C. Duchi, Peter Glynn, Hongseok Namkoong
11 Oct 2016

Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-dimensional Data
International Conference on Artificial Intelligence and Statistics (AISTATS), 2016
Jialei Wang, Jason D. Lee, M. Mahdavi, Mladen Kolar, Nathan Srebro
10 Oct 2016

Stochastic Alternating Direction Method of Multipliers with Variance Reduction for Nonconvex Optimization
Feihu Huang, Songcan Chen, Zhaosong Lu
10 Oct 2016

Federated Optimization: Distributed Machine Learning for On-Device Intelligence
Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik
FedML · 08 Oct 2016

Stochastic Averaging for Constrained Optimization with Application to Online Resource Allocation
IEEE Transactions on Signal Processing (IEEE TSP), 2016
Tianyi Chen, Aryan Mokhtari, Xin Wang, Alejandro Ribeiro, G. Giannakis
07 Oct 2016

A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning
Aleksandr Aravkin, Damek Davis
04 Oct 2016

Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure
Neural Information Processing Systems (NeurIPS), 2016
A. Bietti, Julien Mairal
04 Oct 2016
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
SIAM Journal on Optimization (SIAM J. Optim.), 2016
Hongzhou Lin, Julien Mairal, Zaïd Harchaoui
04 Oct 2016

A Primer on Coordinate Descent Algorithms
Hao-Jun Michael Shi, Shenyinying Tu, Yangyang Xu, W. Yin
30 Sep 2016

Decoupled Asynchronous Proximal Stochastic Gradient Descent with Variance Reduction
Zhouyuan Huo, Bin Gu, Heng-Chiao Huang
22 Sep 2016

Gray-box inference for structured Gaussian process models
P. Galliani, Amir Dezfouli, Edwin V. Bonilla, Novi Quadrianto
BDL · 14 Sep 2016

Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method
Lihua Lei, Sai Li
12 Sep 2016

AIDE: Fast and Communication Efficient Distributed Optimization
Sashank J. Reddi, Jakub Konecný, Peter Richtárik, Barnabás Póczós, Alex Smola
24 Aug 2016

A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates
Tianbao Yang, Qihang Lin, Lijun Zhang
11 Aug 2016

Stochastic Frank-Wolfe Methods for Nonconvex Optimization
Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola
27 Jul 2016

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
SIAM Journal on Optimization (SIAM J. Optim.), 2016
Tianlin Li, Shiqian Ma, Wen Liu
05 Jul 2016

Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition
Analysis and Applications (AA), 2016
Yi Tian Xu, Qihang Lin, Tianbao Yang
04 Jul 2016