SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

1 July 2014
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien
ODL
arXiv:1407.0202
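
For context on the method the papers below cite: SAGA stores the last gradient evaluated for each component function and combines it with the running average of all stored gradients to form an unbiased, variance-reduced update; a proximal step supports a non-smooth composite term h. Below is a minimal illustrative NumPy sketch; `grad_i`, `prox_h`, and all other names are assumptions for illustration, not the authors' code.

    import numpy as np

    def saga(grad_i, prox_h, w0, n, gamma, iters, rng=None):
        """Sketch of SAGA for min_w (1/n) * sum_i f_i(w) + h(w).

        grad_i(i, w) -- gradient of the i-th component f_i at w.
        prox_h(v, gamma) -- proximal operator of gamma * h.
        Both callbacks, and all names here, are illustrative assumptions.
        """
        rng = rng or np.random.default_rng(0)
        w = w0.copy()
        # Table of the most recently evaluated gradient of each f_i.
        table = np.stack([grad_i(i, w) for i in range(n)])
        avg = table.mean(axis=0)  # running mean of the stored gradients
        for _ in range(iters):
            j = int(rng.integers(n))          # pick a component uniformly
            g_new = grad_i(j, w)
            # Unbiased, variance-reduced gradient estimate.
            v = g_new - table[j] + avg
            w = prox_h(w - gamma * v, gamma)  # proximal step handles h
            # O(d) bookkeeping: refresh the mean, then the table entry.
            avg = avg + (g_new - table[j]) / n
            table[j] = g_new
        return w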

Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives"

50 / 353 papers shown

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik
ODL
25 Aug 2020

Solving Stochastic Compositional Optimization is Nearly as Easy as Solving Stochastic Optimization
Tianyi Chen, Yuejiao Sun, W. Yin
25 Aug 2020

Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning
Shuang Qiu, Zhuoran Yang, Xiaohan Wei, Jieping Ye, Zhaoran Wang
23 Aug 2020

Privacy-Preserving Asynchronous Federated Learning Algorithms for Multi-Party Vertically Collaborative Learning
Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, Heng-Chiao Huang
FedML
14 Aug 2020

A Survey on Large-scale Machine Learning
Meng Wang, Weijie Fu, Xiangnan He, Shijie Hao, Xindong Wu
10 Aug 2020

Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient
Hao Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiao Yan, Ji-Rong Wen
25 Jul 2020

On stochastic mirror descent with interacting particles: convergence properties and variance reduction
Anastasia Borovykh, N. Kantas, P. Parpas, G. Pavliotis
15 Jul 2020

AdaScale SGD: A User-Friendly Algorithm for Distributed Training
Tyler B. Johnson, Pulkit Agrawal, Haijie Gu, Carlos Guestrin
ODL
09 Jul 2020

Stochastic Hamiltonian Gradient Methods for Smooth Games
Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas
08 Jul 2020

Stochastic Stein Discrepancies
Jackson Gorham, Anant Raj, Lester W. Mackey
06 Jul 2020

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Chaobing Song, Yong Jiang, Yi Ma
18 Jun 2020

Minibatch vs Local SGD for Heterogeneous Distributed Learning
Blake E. Woodworth, Kumar Kshitij Patel, Nathan Srebro
FedML
08 Jun 2020

Federated Stochastic Gradient Langevin Dynamics
Khaoula El Mekkaoui, Diego Mesquita, P. Blomstedt, Samuel Kaski
FedML
23 Apr 2020

On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration
Wenlong Mou, C. J. Li, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan
09 Apr 2020

Block Layer Decomposition schemes for training Deep Neural Networks
L. Palagi, R. Seccia
18 Mar 2020

Adaptive Federated Optimization
Sashank J. Reddi, Zachary B. Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, H. B. McMahan
FedML
29 Feb 2020

Adaptive Sampling Distributed Stochastic Variance Reduced Gradient for Heterogeneous Distributed Datasets
Ilqar Ramazanli, Han Nguyen, Hai Pham, Sashank J. Reddi, Barnabás Póczós
20 Feb 2020

A Unified Convergence Analysis for Shuffling-Type Gradient Methods
Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk
19 Feb 2020

A Newton Frank-Wolfe Method for Constrained Self-Concordant Minimization
Deyi Liu, V. Cevher, Quoc Tran-Dinh
17 Feb 2020

Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods
Martin Morin, Pontus Giselsson
13 Feb 2020

Gradient tracking and variance reduction for decentralized optimization and machine learning
Ran Xin, S. Kar, U. Khan
13 Feb 2020

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan
13 Feb 2020

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, D. Kovalev, Peter Richtárik
11 Feb 2020

Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik
09 Feb 2020

Adaptive Stochastic Optimization
Frank E. Curtis, K. Scheinberg
ODL
18 Jan 2020

Variance Reduced Local SGD with Lower Communication Complexity
Xian-Feng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng
FedML
30 Dec 2019

Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks
Zhaoxian Wu, Qing Ling, Tianyi Chen, G. Giannakis
FedML, AAML
29 Dec 2019

Optimization for deep learning: theory and algorithms
Ruoyu Sun
ODL
19 Dec 2019

Cyanure: An Open-Source Toolbox for Empirical Risk Minimization for Python, C++, and soon more
Julien Mairal
17 Dec 2019

On the Global Convergence of (Fast) Incremental Expectation Maximization Methods
Belhal Karimi, Hoi-To Wai, Eric Moulines, M. Lavielle
28 Oct 2019

Differentiable Convex Optimization Layers
Akshay Agrawal, Brandon Amos, Shane T. Barratt, Stephen P. Boyd, Steven Diamond, Zico Kolter
28 Oct 2019

Katyusha Acceleration for Convex Finite-Sum Compositional Optimization
Yibo Xu, Yangyang Xu
24 Oct 2019

The Practicality of Stochastic Optimization in Imaging Inverse Problems
Junqi Tang, K. Egiazarian, Mohammad Golbabaee, Mike Davies
22 Oct 2019

History-Gradient Aided Batch Size Adaptation for Variance Reduced Algorithms
Kaiyi Ji, Zhe Wang, Bowen Weng, Yi Zhou, Wei Zhang, Yingbin Liang
ODL
21 Oct 2019

Aggregated Gradient Langevin Dynamics
Chao Zhang, Jiahao Xie, Zebang Shen, P. Zhao, Tengfei Zhou, Hui Qian
21 Oct 2019

General Proximal Incremental Aggregated Gradient Algorithms: Better and Novel Results under General Scheme
Tao Sun, Yuejiao Sun, Dongsheng Li, Qing Liao
11 Oct 2019

Variance-Reduced Decentralized Stochastic Optimization with Gradient Tracking -- Part II: GT-SVRG
Ran Xin, U. Khan, S. Kar
08 Oct 2019

Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
Pan Xu, F. Gao, Quanquan Gu
18 Sep 2019

Trajectory-wise Control Variates for Variance Reduction in Policy Gradient Methods
Ching-An Cheng, Xinyan Yan, Byron Boots
08 Aug 2019

Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions
Matthew Faw, Rajat Sen, Karthikeyan Shanmugam, Constantine Caramanis, Sanjay Shakkottai
23 Jul 2019

Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization
Adithya M. Devraj, Jianshu Chen
22 Jul 2019

A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen
08 Jul 2019

Learning Activation Functions: A new paradigm for understanding Neural Networks
Mohit Goyal, R. Goyal, Brejesh Lall
23 Jun 2019

A Unifying Framework for Variance Reduction Algorithms for Finding Zeroes of Monotone Operators
Xun Zhang, W. Haskell, Z. Ye
22 Jun 2019

Reducing the variance in online optimization by transporting past gradients
Sébastien M. R. Arnold, Pierre-Antoine Manzagol, Reza Babanezhad, Ioannis Mitliagkas, Nicolas Le Roux
08 Jun 2019

Global Optimality Guarantees For Policy Gradient Methods
Jalaj Bhandari, Daniel Russo
05 Jun 2019

Why gradient clipping accelerates training: A theoretical justification for adaptivity
J.N. Zhang, Tianxing He, S. Sra, Ali Jadbabaie
28 May 2019

Natural Compression for Distributed Deep Learning
Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik
27 May 2019

Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
Sharan Vaswani, Aaron Mishkin, I. Laradji, Mark Schmidt, Gauthier Gidel, Simon Lacoste-Julien
ODL
24 May 2019