SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

Neural Information Processing Systems (NeurIPS), 2014
1 July 2014
Aaron Defazio
Francis R. Bach
Simon Lacoste-Julien
    ODL
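
Since this page indexes papers citing SAGA, a minimal sketch of the update rule the paper proposes may help orient the reader. The instantiation below (a least-squares loss with an optional L1 term, the function name saga_least_squares, and all parameter defaults) is an illustrative assumption, not the authors' reference implementation; only the stored-gradient update and the proximal step follow the algorithm as described in the paper.

```python
import numpy as np

def saga_least_squares(A, b, gamma, lam=0.0, epochs=10, seed=0):
    """Illustrative SAGA sketch for min_x (1/2n)||Ax - b||^2 + lam*||x||_1.

    SAGA keeps a table of the most recently evaluated gradient of each
    f_j and uses it to build an unbiased, variance-reduced gradient
    estimate; a proximal step handles the nonsmooth composite term.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Stored per-example gradients g_j = grad f_j(x0), plus their mean.
    grads = A * (A @ x - b)[:, None]
    avg = grads.mean(axis=0)
    for _ in range(epochs * n):
        j = rng.integers(n)
        g_new = A[j] * (A[j] @ x - b[j])   # fresh gradient of f_j at current x
        v = g_new - grads[j] + avg         # unbiased, variance-reduced estimate
        x = x - gamma * v
        if lam > 0.0:
            # Proximal step for the L1 term (soft-thresholding).
            x = np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)
        avg += (g_new - grads[j]) / n      # keep the running mean in sync
        grads[j] = g_new
    return x
```

For the step size, the paper analyzes gamma = 1/(3L) in the general (non-strongly) convex setting, where L bounds the per-example gradient Lipschitz constants; for this least-squares instance that would be max_j ||a_j||^2.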

Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives"

Showing 50 of 878 citing papers.
Dimension-Free Iteration Complexity of Finite Sum Optimization Problems
Neural Information Processing Systems (NeurIPS), 2016
Yossi Arjevani
Ohad Shamir
149
25
0
30 Jun 2016
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
Aryan Mokhtari
Alec Koppel
Alejandro Ribeiro
167
15
0
15 Jun 2016
Optimization Methods for Large-Scale Machine Learning
Léon Bottou
Frank E. Curtis
J. Nocedal
821
3,554
0
15 Jun 2016
ASAGA: Asynchronous Parallel SAGA
Rémi Leblond
Fabian Pedregosa
Simon Lacoste-Julien
AI4TS
255
106
0
15 Jun 2016
Variance-Reduced Proximal Stochastic Gradient Descent for Non-convex Composite optimization
Xiyu Yu
Dacheng Tao
194
5
0
02 Jun 2016
Distributed Asynchronous Dual Free Stochastic Dual Coordinate Ascent
Zhouyuan Huo
Heng-Chiao Huang
316
1
0
29 May 2016
Level Up Your Strategy: Towards a Descriptive Framework for Meaningful Enterprise Gamification
Xinghao Pan
200
63
0
29 May 2016
NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization
Davood Hajinezhad
Mingyi Hong
T. Zhao
Zhaoran Wang
139
45
0
25 May 2016
Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy
Aryan Mokhtari
Alejandro Ribeiro
ODL
185
33
0
24 May 2016
Riemannian stochastic variance reduced gradient on Grassmann manifold
Hiroyuki Kasai
Hiroyuki Sato
Bamdev Mishra
307
22
0
24 May 2016
Riemannian SVRG: Fast Stochastic Optimization on Riemannian Manifolds
Hongyi Zhang
Sashank J. Reddi
S. Sra
250
255
0
23 May 2016
Fast Stochastic Methods for Nonsmooth Nonconvex Optimization
Sashank J. Reddi
S. Sra
Barnabás Póczós
Alex Smola
198
54
0
23 May 2016
Accelerated Randomized Mirror Descent Algorithms For Composite Non-strongly Convex Optimization
L. Hien
Cuong V Nguyen
Huan Xu
Canyi Lu
Jiashi Feng
313
19
0
23 May 2016
DynaNewton - Accelerating Newton's Method for Machine Learning
Hadi Daneshmand
Aurelien Lucchi
Thomas Hofmann
70
3
0
20 May 2016
Stochastic Variance Reduction Methods for Saddle-Point Problems
B. Palaniappan
Francis R. Bach
316
220
0
20 May 2016
A Multi-Batch L-BFGS Method for Machine Learning
A. Berahas
J. Nocedal
Martin Takáč
ODL
270
121
0
19 May 2016
Barzilai-Borwein Step Size for Stochastic Gradient Descent
Conghui Tan
Shiqian Ma
Yuhong Dai
Yuqiu Qian
273
197
0
13 May 2016
On the Iteration Complexity of Oblivious First-Order Optimization Algorithms
Yossi Arjevani
Ohad Shamir
176
34
0
11 May 2016
Nonconvex Sparse Learning via Stochastic Optimization with Progressive Variance Reduction
Xingguo Li
R. Arora
Han Liu
Jarvis Haupt
T. Zhao
290
71
0
09 May 2016
A Tight Bound of Hard Thresholding
Jie Shen
Ping Li
233
101
0
05 May 2016
Stochastic Variance-Reduced ADMM
Shuai Zheng
James T. Kwok
308
62
0
24 Apr 2016
A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
Shun Zheng
Jialei Wang
Fen Xia
Wenyuan Xu
Tong Zhang
252
22
0
13 Apr 2016
Asynchronous Stochastic Gradient Descent with Variance Reduction for Non-Convex Optimization
Zhouyuan Huo
Heng-Chiao Huang
221
49
0
12 Apr 2016
Trading-off variance and complexity in stochastic gradient descent
Vatsal Shah
Megasthenis Asteris
Anastasios Kyrillidis
Sujay Sanghavi
184
13
0
22 Mar 2016
Doubly Random Parallel Stochastic Methods for Large Scale Learning
Aryan Mokhtari
Alec Koppel
Alejandro Ribeiro
134
15
0
22 Mar 2016
Stochastic Variance Reduction for Nonconvex Optimization
Sashank J. Reddi
Ahmed S. Hefny
S. Sra
Barnabás Póczós
Alex Smola
385
630
0
19 Mar 2016
Fast Incremental Method for Nonconvex Optimization
Sashank J. Reddi
S. Sra
Barnabás Póczós
Alex Smola
185
45
0
19 Mar 2016
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu
ODL
553
607
0
18 Mar 2016
Variance Reduction for Faster Non-Convex Optimization
Zeyuan Allen-Zhu
Elad Hazan
ODL
330
407
0
17 Mar 2016
Optimal Black-Box Reductions Between Optimization Objectives
Zeyuan Allen-Zhu
Elad Hazan
310
96
0
17 Mar 2016
Distributed Inexact Damped Newton Method: Data Partitioning and Load-Balancing
Chenxin Ma
Martin Takáč
198
10
0
16 Mar 2016
On the Influence of Momentum Acceleration on Online Learning
Kun Yuan
Bicheng Ying
Ali H. Sayed
273
60
0
14 Mar 2016
Starting Small -- Learning with Adaptive Sample Sizes
Hadi Daneshmand
Aurelien Lucchi
Thomas Hofmann
178
0
0
09 Mar 2016
Stochastic dual averaging methods using variance reduction techniques for regularized empirical risk minimization problems
Tomoya Murata
Taiji Suzuki
113
3
0
08 Mar 2016
Without-Replacement Sampling for Stochastic Gradient Methods: Convergence Results and Application to Distributed Optimization
Ohad Shamir
192
33
0
02 Mar 2016
Fast Nonsmooth Regularized Risk Minimization with Continuation
Shuai Zheng
Ruiliang Zhang
James T. Kwok
218
1
0
25 Feb 2016
Second-Order Stochastic Optimization for Machine Learning in Linear Time
Naman Agarwal
Brian Bullins
Elad Hazan
ODL
431
103
0
12 Feb 2016
A Simple Practical Accelerated Method for Finite Sums
Aaron Defazio
287
123
0
08 Feb 2016
Importance Sampling for Minibatches
Dominik Csiba
Peter Richtárik
203
128
0
06 Feb 2016
Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters
Zeyuan Allen-Zhu
Yang Yuan
Karthik Sridharan
215
29
0
05 Feb 2016
Reducing Runtime by Recycling Samples
Jialei Wang
Hai Wang
Nathan Srebro
152
3
0
05 Feb 2016
SDCA without Duality, Regularization, and Individual Convexity
Shai Shalev-Shwartz
175
106
0
04 Feb 2016
Adaptive Algorithms for Online Convex Optimization with Long-term Constraints
Rodolphe Jenatton
Jim C. Huang
Cédric Archambeau
257
167
0
23 Dec 2015
Distributed Optimization with Arbitrary Local Solvers
Chenxin Ma
Jakub Konecný
Martin Jaggi
Virginia Smith
Sai Li
Peter Richtárik
Martin Takáč
378
204
0
13 Dec 2015
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
Tianbao Yang
Qihang Lin
695
90
0
09 Dec 2015
Efficient Distributed SGD with Variance Reduction
Soham De
Tom Goldstein
228
43
0
09 Dec 2015
Variance Reduction for Distributed Stochastic Gradient Descent
Soham De
Gavin Taylor
Tom Goldstein
173
8
0
05 Dec 2015
Stochastic Parallel Block Coordinate Descent for Large-scale Saddle Point Problems
Zhanxing Zhu
Amos J. Storkey
ODL
103
7
0
23 Nov 2015
Speed learning on the fly
Pierre-Yves Massé
Yann Ollivier
158
13
0
08 Nov 2015
Stop Wasting My Gradients: Practical SVRG
Reza Babanezhad
Mohamed Osama Ahmed
Alim Virani
Mark Schmidt
Jakub Konecný
Scott Sallinen
194
139
0
05 Nov 2015