SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien
Neural Information Processing Systems (NeurIPS), 2014 · 1 July 2014 · arXiv:1407.0202 · ODL
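
For readers new to the paper: SAGA keeps a table of the most recently evaluated gradient of each component function f_i and, at every step, corrects a fresh stochastic gradient with the table average, which removes the variance that forces plain SGD onto decaying step sizes. Below is a minimal NumPy sketch of that update on a smooth least-squares toy problem; the proximal step for composite objectives is omitted, and the data, names, and iteration budget are illustrative rather than taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of f_i(x) = 0.5 * (a_i @ x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
gamma = 1.0 / (3.0 * np.max(np.sum(A**2, axis=1)))  # step ~ 1/(3L), as in the paper
table = np.array([grad_i(x, i) for i in range(n)])  # stored gradients g_i
avg = table.mean(axis=0)                            # running average of the table

for _ in range(10 * n):
    j = rng.integers(n)
    g_new = grad_i(x, j)
    x -= gamma * (g_new - table[j] + avg)  # SAGA direction: f_j'(x) - g_j + mean(g)
    avg += (g_new - table[j]) / n          # keep the average consistent with the table
    table[j] = g_new

print("mean squared residual:", np.mean((A @ x - b) ** 2))

The per-step cost matches SGD (one component gradient plus O(d) bookkeeping), at the price of O(nd) memory for the gradient table; that trade-off is the paper's central design choice.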

Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives"

Showing 50 of 878 citing papers.
ErrorCompensatedX: error compensation for variance reduced algorithms
Hanlin Tang, Yao Li, Ji Liu, Ming Yan
Neural Information Processing Systems (NeurIPS), 2021 · 04 Aug 2021

Physics-informed Dyna-Style Model-Based Deep Reinforcement Learning for Dynamic Control
Xin-Yang Liu, Jian-Xun Wang
Proceedings of the Royal Society A (Proc. R. Soc. A), 2021 · 31 Jul 2021 · AI4CE

Only Train Once: A One-Shot Neural Network Training And Pruning Framework
Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, Xiao Tu
Neural Information Processing Systems (NeurIPS), 2021 · 15 Jul 2021

Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization
Xia Jiang, Xianlin Zeng, Jian Sun, Jie Chen
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021 · 28 Jun 2021

Behavior Mimics Distribution: Combining Individual and Group Behaviors for Federated Learning
Hua Huang, Fanhua Shang, Yuanyuan Liu, Hongying Liu
23 Jun 2021 · FedML

Stochastic Polyak Stepsize with a Moving Target
Robert Mansel Gower, Aaron Defazio, Michael G. Rabbat
22 Jun 2021

Adaptive Learning Rate and Momentum for Training Deep Neural Networks
Zhiyong Hao, Yixuan Jiang, Huihua Yu, H. Chiang
22 Jun 2021 · ODL

Decentralized Constrained Optimization: Double Averaging and Gradient Projection
Firooz Shahriari-Mehr, David Bosch, Ashkan Panahi
IEEE Conference on Decision and Control (CDC), 2021 · 21 Jun 2021

Secure Distributed Training at Scale
Eduard A. Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin
International Conference on Machine Learning (ICML), 2021 · 21 Jun 2021 · FedML

Memory Augmented Optimizers for Deep Learning
Paul-Aymeric McRae, Prasanna Parthasarathi, Mahmoud Assran, Sarath Chandar
International Conference on Learning Representations (ICLR), 2021 · 20 Jun 2021 · ODL

Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach
Qiujiang Jin, Aryan Mokhtari
Neural Information Processing Systems (NeurIPS), 2021 · 10 Jun 2021

Unbalanced Optimal Transport through Non-negative Penalized Linear Regression
Laetitia Chapel, Rémi Flamary, Haoran Wu, Cédric Févotte, Gilles Gasso
Neural Information Processing Systems (NeurIPS), 2021 · 08 Jun 2021 · OT

Asynchronous Distributed Optimization with Redundancy in Cost Functions
Shuo Liu, Nirupam Gupta, Nitin H. Vaidya
07 Jun 2021

MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization
Laurent Condat, Peter Richtárik
Mathematical and Scientific Machine Learning (MSML), 2021 · 06 Jun 2021

Near Optimal Stochastic Algorithms for Finite-Sum Unbalanced Convex-Concave Minimax Optimization
Luo Luo, Guangzeng Xie, Tong Zhang, Zhihua Zhang
03 Jun 2021

Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
M. Belkin
Acta Numerica (AN), 2021 · 29 May 2021

Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums
Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021 · 25 May 2021

Classifying variety of customer's online engagement for churn prediction with mixed-penalty logistic regression
Petra Posedel Šimović, D. Horvatić, Edward W. Sun
17 May 2021

Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong, Shibani Santurkar, Aleksander Madry
International Conference on Machine Learning (ICML), 2021 · 11 May 2021 · FAtt

Implicit differentiation for fast hyperparameter selection in non-smooth convex learning
Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Journal of Machine Learning Research (JMLR), 2021 · 04 May 2021

GT-STORM: Taming Sample, Communication, and Memory Complexities in Decentralized Non-Convex Learning
Xin Zhang, Jia Liu, Zhengyuan Zhu, Elizabeth S. Bentley
ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), 2021 · 04 May 2021

MARL: Multimodal Attentional Representation Learning for Disease Prediction
Ali Hamdi, Amr Aboeleneen, Khaled Shaban
International Conference on Computer Vision Systems (ICVS), 2021 · 01 May 2021

Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
Michael C. Burkhart
Optimization Letters (Optim. Lett.), 2021 · 27 Apr 2021

Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders
Xinmeng Huang, Kun Yuan, Xianghui Mao, W. Yin
25 Apr 2021

Random Reshuffling with Variance Reduction: New Analysis and Better Rates
Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik
Conference on Uncertainty in Artificial Intelligence (UAI), 2021 · 19 Apr 2021

BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning
He Zhu, Qing Ling
IEEE Transactions on Signal and Information Processing over Networks (TSIPN), 2021 · 14 Apr 2021 · FedML, AAML

Greedy-GQ with Variance Reduction: Finite-time Analysis and Improved Complexity
Shaocong Ma, Ziyi Chen, Yi Zhou, Shaofeng Zou
International Conference on Learning Representations (ICLR), 2021 · 30 Mar 2021

Stochastic Reweighted Gradient Descent
Ayoub El Hanchi, D. Stephens
International Conference on Machine Learning (ICML), 2021 · 23 Mar 2021

Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes
Ayoub El Hanchi, D. Stephens
Neural Information Processing Systems (NeurIPS), 2021 · 23 Mar 2021

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li
21 Mar 2021

Escaping Saddle Points with Stochastically Controlled Stochastic Gradient Methods
Guannan Liang, Qianqian Tong, Chunjiang Zhu, J. Bi
07 Mar 2021

A Retrospective Approximation Approach for Smooth Stochastic Optimization
David Newton, Raghu Bollapragada, R. Pasupathy, N. Yip
Mathematics of Operations Research (MOR), 2021 · 07 Mar 2021

Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating
Qingsong Zhang, Bin Gu, Cheng Deng, Heng-Chiao Huang
AAAI Conference on Artificial Intelligence (AAAI), 2021 · 01 Mar 2021 · FedML

Learning with Smooth Hinge Losses
Junru Luo, Hong Qiao, Bo Zhang
Neurocomputing, 2021 · 27 Feb 2021

Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums
Chaobing Song, Stephen J. Wright, Jelena Diakonikolas
International Conference on Machine Learning (ICML), 2021 · 26 Feb 2021

A Variance Controlled Stochastic Method with Biased Estimation for Faster Non-convex Optimization
Jia Bi, S. Gunn
19 Feb 2021

Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques
Filip Hanzely, Boxin Zhao, Mladen Kolar
19 Feb 2021 · FedML

AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods
Zheng Shi, Abdurakhmon Sadiev, Nicolas Loizou, Peter Richtárik, Martin Takáč
19 Feb 2021 · ODL

SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark Schmidt, Damien Scieur
Machine Learning (ML), 2021 · 18 Feb 2021

Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm
Bin Gu, Guodong Liu, Yanfu Zhang, Xiang Geng, Heng-Chiao Huang
17 Feb 2021

On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method
Junyu Zhang, Chengzhuo Ni, Zheng Yu, Csaba Szepesvári, Mengdi Wang
Neural Information Processing Systems (NeurIPS), 2021 · 17 Feb 2021

Stochastic Variance Reduction for Variational Inequality Methods
Ahmet Alacaoglu, Yura Malitsky
Conference on Learning Theory (COLT), 2021 · 16 Feb 2021

Distributed Second Order Methods with Fast Rates and Compressed Communication
Rustem Islamov, Xun Qian, Peter Richtárik
International Conference on Machine Learning (ICML), 2021 · 14 Feb 2021

Stochastic Gradient Langevin Dynamics with Variance Reduction
Zhishen Huang, Stephen Becker
IEEE International Joint Conference on Neural Networks (IJCNN), 2021 · 12 Feb 2021

An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians
Sen Na, M. Anitescu, Mladen Kolar
Mathematical Programming (Math. Program.), 2021 · 10 Feb 2021

A New Framework for Variance-Reduced Hamiltonian Monte Carlo
Zhengmian Hu, Feihu Huang, Heng-Chiao Huang
09 Feb 2021

DeEPCA: Decentralized Exact PCA with Linear Convergence Rate
Haishan Ye, Tong Zhang
Journal of Machine Learning Research (JMLR), 2021 · 08 Feb 2021

Coordinating Momenta for Cross-silo Federated Learning
An Xu, Heng-Chiao Huang
AAAI Conference on Artificial Intelligence (AAAI), 2021 · 08 Feb 2021 · FedML

Screening for Sparse Online Learning
Jingwei Liang, C. Poon
Journal of Computational and Graphical Statistics (JCGS), 2021 · 18 Jan 2021

Urban land-use analysis using proximate sensing imagery: a survey
Zhinan Qiao, Xiaohui Yuan
International Journal of Geographical Information Science (IJGIS), 2021 · 13 Jan 2021