SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
arXiv:1407.0202 (v3, latest)
Neural Information Processing Systems (NeurIPS), 2014
1 July 2014
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien
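For context on the method the citing papers build on: SAGA stores the most recently evaluated gradient for each example and combines it with the running average to form an unbiased, variance-reduced step. A minimal sketch for an unregularized least-squares objective, in the standard textbook form of the update (this code and its names are illustrative, not taken from the paper or this page):

```python
import numpy as np

def saga_least_squares(A, b, step, n_iters, seed=0):
    """SAGA on f(x) = (1/n) * sum_i 0.5 * (a_i.x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Table of stored per-example gradients at the initial point,
    # plus their running average (grad of 0.5*(a_i.x - b_i)^2 is a_i*(a_i.x - b_i)).
    grads = A * (A @ x - b)[:, None]
    avg = grads.mean(axis=0)
    for _ in range(n_iters):
        j = rng.integers(n)
        g_new = A[j] * (A[j] @ x - b[j])
        # SAGA step: fresh gradient minus stored gradient plus table average
        # is an unbiased estimate of the full gradient with shrinking variance.
        x = x - step * (g_new - grads[j] + avg)
        # Refresh the table entry and its average in O(d) per iteration.
        avg += (g_new - grads[j]) / n
        grads[j] = g_new
    return x
```

With a smooth strongly convex finite sum, a step size on the order of 1/(3L) (L the largest per-example smoothness constant) gives linear convergence; for the composite objectives in the paper's title, the step is followed by a proximal operator, which this sketch omits.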

Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives"

50 / 878 papers shown
Duality-free Methods for Stochastic Composition Optimization
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2017
Liu Liu, Ji Liu, Dacheng Tao
26 Oct 2017

Curvature-aided Incremental Aggregated Gradient Method
Allerton Conference on Communication, Control, and Computing (Allerton), 2017
Hoi-To Wai, Wei Shi, A. Nedić, Anna Scaglione
24 Oct 2017

A Novel Stochastic Stratified Average Gradient Method: Convergence Rate and Its Complexity
Aixiang Chen, Bingchuan Chen, Xiaolong Chai, Rui-Ling Bian, Hengguang Li
21 Oct 2017

Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods
Robert Mansel Gower, Nicolas Le Roux, Francis R. Bach
20 Oct 2017

Smooth and Sparse Optimal Transport
Mathieu Blondel, Vivien Seguy, Antoine Rolet
17 Oct 2017

DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization
Lin Xiao, Adams Wei Yu, Qihang Lin, Weizhu Chen
13 Oct 2017

Sign-Constrained Regularized Loss Minimization
Tsuyoshi Kato, Misato Kobayashi, Daisuke Sano
12 Oct 2017

A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary Convex Regularizer
Tianbao Yang, Zhe Li, Lijun Zhang
09 Sep 2017
A Generic Approach for Escaping Saddle points
Sashank J. Reddi, Manzil Zaheer, S. Sra, Barnabás Póczós, Francis R. Bach, Ruslan Salakhutdinov, Alex Smola
05 Sep 2017

Stochastic Gradient Descent: Going As Fast As Possible But Not Faster
Alice Schoenauer Sebag, Marc Schoenauer, Michèle Sebag
05 Sep 2017

A Convergence Analysis for A Class of Practical Variance-Reduction Stochastic Gradient MCMC
Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, Lawrence Carin
04 Sep 2017

First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization
Aryan Mokhtari, Alejandro Ribeiro
02 Sep 2017

Natasha 2: Faster Non-Convex Optimization Than SGD
Zeyuan Allen-Zhu
29 Aug 2017

An inexact subsampled proximal Newton-type method for large-scale machine learning
Xuanqing Liu, Cho-Jui Hsieh, Jason D. Lee, Yuekai Sun
28 Aug 2017

Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information
Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney
23 Aug 2017

Stochastic Optimization with Bandit Sampling
Farnood Salehi, L. E. Celis, Patrick Thiran
08 Aug 2017
Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling
Kun Yuan, Bicheng Ying, Jiageng Liu, Ali H. Sayed
04 Aug 2017

Variance-Reduced Stochastic Learning under Random Reshuffling
Bicheng Ying, Kun Yuan, Ali H. Sayed
04 Aug 2017

A Robust Multi-Batch L-BFGS Method for Machine Learning
A. Berahas, Martin Takáč
26 Jul 2017

Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization
Fabian Pedregosa, Rémi Leblond, Damien Scieur
20 Jul 2017

Stochastic Variance Reduction Gradient for a Non-convex Problem Using Graduated Optimization
Li Chen, Shuisheng Zhou, Zhuan Zhang
10 Jul 2017

Stochastic, Distributed and Federated Optimization for Machine Learning
Jakub Konecný
04 Jul 2017

Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning
Frank E. Curtis, K. Scheinberg
30 Jun 2017

IS-ASGD: Accelerating Asynchronous SGD using Importance Sampling
Fei Wang, Jun Ye, Weichen Li, Guihai Chen
26 Jun 2017

A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
Bin Hu, Peter M. Seiler, Anders Rantzer
25 Jun 2017
Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations
Jialei Wang, Tong Zhang
21 Jun 2017

Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
A. Chambolle, Matthias Joachim Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb
15 Jun 2017

Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification
Pattern Recognition (PR), 2017
Lin Wu, Yang Wang, Junbin Gao, Xue Li
10 Jun 2017

Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization
Neural Information Processing Systems (NeurIPS), 2017
Yossi Arjevani
06 Jun 2017

Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
SIAM Journal on Matrix Analysis and Applications (SIMAX), 2017
Peter Richtárik, Martin Takáč
04 Jun 2017

Distributed SAGA: Maintaining linear convergence rate with limited communication
Clément Calauzènes, Nicolas Le Roux
29 May 2017

Approximate and Stochastic Greedy Optimization
N. Ye, Peter L. Bartlett
25 May 2017

Convergent Tree Backup and Retrace with Function Approximation
Ahmed Touati, Pierre-Luc Bacon, Doina Precup, Pascal Vincent
25 May 2017

Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method
Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro
22 May 2017

Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients
Lukas Balles, Philipp Hennig
22 May 2017
An Asynchronous Distributed Framework for Large-scale Learning Based on Parameter Exchanges
Bikash Joshi, F. Iutzeler, Massih-Reza Amini
22 May 2017

Parallel Streaming Wasserstein Barycenters
Matthew Staib, Sebastian Claici, Justin Solomon, Stefanie Jegelka
21 May 2017

Stochastic Recursive Gradient Algorithm for Nonconvex Optimization
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč
20 May 2017

A Unified Framework for Stochastic Matrix Factorization via Variance Reduction
Renbo Zhao, W. Haskell, Jiashi Feng
19 May 2017

An Investigation of Newton-Sketch and Subsampled Newton Methods
A. Berahas, Raghu Bollapragada, J. Nocedal
17 May 2017

Sub-sampled Cubic Regularization for Non-convex Optimization
Jonas Köhler, Aurelien Lucchi
16 May 2017

Training L1-Regularized Models with Orthant-Wise Passive Descent Algorithms
Jianqiao Wangni
26 Apr 2017

Linear Convergence of Accelerated Stochastic Gradient Descent for Nonconvex Nonsmooth Optimization
Feihu Huang, Songcan Chen
26 Apr 2017

Argument Mining with Structured SVMs and RNNs
Vlad Niculae, Joonsuk Park, Claire Cardie
23 Apr 2017
Batch-Expansion Training: An Efficient Optimization Framework
Michal Derezinski, D. Mahajan, S. Keerthi, S.V.N. Vishwanathan, Markus Weimer
22 Apr 2017

Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction
Fanhua Shang
17 Apr 2017

Deep Relaxation: partial differential equations for optimizing deep neural networks
Pratik Chaudhari, Adam M. Oberman, Stanley Osher, Stefano Soatto, G. Carlier
17 Apr 2017

Stochastic Gradient Descent as Approximate Bayesian Inference
Stephan Mandt, Matthew D. Hoffman, David M. Blei
13 Apr 2017

Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies
Renbo Zhao, W. Haskell, Vincent Y. F. Tan
01 Apr 2017

Catalyst Acceleration for Gradient-Based Non-Convex Optimization
Courtney Paquette, Hongzhou Lin, Dmitriy Drusvyatskiy, Julien Mairal, Zaïd Harchaoui
31 Mar 2017