ResearchTrend.AI

arXiv:1407.0202

SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

Neural Information Processing Systems (NeurIPS), 2014
1 July 2014
Aaron Defazio
Francis R. Bach
Damien Scieur
    ODL
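For context on why this paper is so widely cited, its core update rule can be sketched in a few lines. This is an illustrative reimplementation of the SAGA iteration (keep a table of the last gradient seen for each component, and correct the stochastic gradient with that table's mean), not the authors' code; the names `saga` and `grad_i` are ours, and the toy least-squares problem is only for demonstration.

```python
import numpy as np

def saga(grad_i, x0, n, gamma, steps, seed=None):
    """Minimal SAGA sketch for minimizing (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of f_i at x. `memory` stores the
    last gradient evaluated for each index i; each step moves along
    grad_i(j, x) - memory[j] + mean(memory), which is an unbiased,
    variance-reduced estimate of the full gradient.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    # Initialize the gradient table at the starting point.
    memory = np.array([grad_i(i, x) for i in range(n)])
    avg = memory.mean(axis=0)
    for _ in range(steps):
        j = int(rng.integers(n))
        g_new = grad_i(j, x)
        x -= gamma * (g_new - memory[j] + avg)
        # Maintain the running average in O(d), then refresh the table entry.
        avg += (g_new - memory[j]) / n
        memory[j] = g_new
    return x

# Toy problem: f_i(x) = 0.5 * (a_i @ x - b_i)^2 with a known solution.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
x_star = np.array([1.0, -2.0, 0.5])
b = A @ x_star
grad = lambda i, x: (A[i] @ x - b[i]) * A[i]
x_hat = saga(grad, np.zeros(3), n=50, gamma=0.02, steps=5000, seed=1)
```

With a step size on the order of `1/(3L)` (as the paper's theory suggests for smoothness constant `L`), the iterate converges linearly to `x_star` on this noiseless problem.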

Papers citing "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives"

50 / 878 papers shown
A Dual Accelerated Method for Online Stochastic Distributed Averaging: From Consensus to Decentralized Policy Evaluation
IEEE Transactions on Automatic Control (TAC), 2022
Sheng Zhang
A. Pananjady
Justin Romberg
OffRL
266
4
0
23 Jul 2022
Riemannian Stochastic Gradient Method for Nested Composition Optimization
IEEE Conference on Decision and Control (CDC), 2022
Dewei Zhang
S. Tajbakhsh
236
1
0
19 Jul 2022
Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
Neural Information Processing Systems (NeurIPS), 2022
Wei Jiang
Gang Li
Yibo Wang
Lijun Zhang
Tianbao Yang
308
18
0
18 Jul 2022
SPIRAL: A superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization
Computational Optimization and Applications (Comput. Optim. Appl.), 2022
Pourya Behmandpoor
P. Latafat
Andreas Themelis
Marc Moonen
Panagiotis Patrinos
219
2
0
17 Jul 2022
Adaptive Sketches for Robust Regression with Importance Sampling
International Workshop on Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM), 2022
S. Mahabadi
David P. Woodruff
Samson Zhou
151
6
0
16 Jul 2022
TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels
Neural Information Processing Systems (NeurIPS), 2022
Yaodong Yu
Alexander Wei
Sai Praneeth Karimireddy
Yi-An Ma
Michael I. Jordan
FedML
257
33
0
13 Jul 2022
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning
Neural Information Processing Systems (NeurIPS), 2022
Grigory Malinovsky
Kai Yi
Peter Richtárik
FedML
290
42
0
09 Jul 2022
Tackling Data Heterogeneity: A New Unified Framework for Decentralized SGD with Sample-induced Topology
International Conference on Machine Learning (ICML), 2022
Yan Huang
Ying Sun
Zehan Zhu
Changzhi Yan
Jinming Xu
FedML
184
18
0
08 Jul 2022
Benchopt: Reproducible, efficient and collaborative optimization benchmarks
Neural Information Processing Systems (NeurIPS), 2022
Thomas Moreau
Mathurin Massias
Alexandre Gramfort
Pierre Ablin
Pierre-Antoine Bannier
Benjamin Charlier
...
Binh Duc Nguyen
A. Rakotomamonjy
Zaccharie Ramzi
Joseph Salmon
Samuel Vaiter
289
48
0
27 Jun 2022
Finding Optimal Policy for Queueing Models: New Parameterization
Trang H. Tran
Lam M. Nguyen
K. Scheinberg
OffRL
128
2
0
21 Jun 2022
SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression
Neural Information Processing Systems (NeurIPS), 2022
Zhize Li
Haoyu Zhao
Boyue Li
Yuejie Chi
FedML
244
49
0
20 Jun 2022
MF-OMO: An Optimization Formulation of Mean-Field Games
SIAM Journal on Control and Optimization (SICON), 2022
Xin Guo
Anran Hu
Junzi Zhang
272
18
0
20 Jun 2022
Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems
Conference on Learning Theory (COLT), 2022
Yunwen Lei
257
23
0
14 Jun 2022
Anchor Sampling for Federated Learning with Partial Client Participation
International Conference on Machine Learning (ICML), 2022
Feijie Wu
Song Guo
Zhihao Qu
Shiqi He
Ziming Liu
Jing Gao
FedML
228
24
0
13 Jun 2022
On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms
Neural Information Processing Systems (NeurIPS), 2022
Lam M. Nguyen
Trang H. Tran
247
3
0
13 Jun 2022
Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning
International Conference on Internet-of-Things Design and Implementation (IoTDI), 2022
Shenghui Li
Edith C.H. Ngai
Fanghua Ye
Li Ju
Tianru Zhang
Thiemo Voigt
AAML
FedML
340
16
0
10 Jun 2022
Push–Pull with Device Sampling
IEEE Transactions on Automatic Control (TAC), 2022
Yu-Guan Hsieh
Yassine Laguel
F. Iutzeler
J. Malick
166
2
0
08 Jun 2022
Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches
Michal Derezinski
327
11
0
06 Jun 2022
Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling
Alexander Tyurin
Lukang Sun
Konstantin Burlachenko
Peter Richtárik
136
9
0
05 Jun 2022
Federated Adversarial Training with Transformers
Ahmed Aldahdooh
W. Hamidouche
Olivier Déforges
FedML
ViT
220
2
0
05 Jun 2022
A PDE-based Explanation of Extreme Numerical Sensitivities and Edge of Stability in Training Neural Networks
Journal of Machine Learning Research (JMLR), 2022
Yuxin Sun
Dong Lao
G. Sundaramoorthi
A. Yezzi
411
2
0
04 Jun 2022
From t-SNE to UMAP with contrastive learning
International Conference on Learning Representations (ICLR), 2022
Sebastian Damrich
Jan Niklas Böhm
Fred Hamprecht
D. Kobak
SSL
345
29
0
03 Jun 2022
Walk for Learning: A Random Walk Approach for Federated Learning from Heterogeneous Data
IEEE Journal on Selected Areas in Communications (JSAC), 2022
Ghadir Ayache
Venkat Dassari
S. E. Rouayheb
FedML
140
30
0
01 Jun 2022
Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top
Eduard A. Gorbunov
Samuel Horváth
Peter Richtárik
Gauthier Gidel
AAML
287
0
0
01 Jun 2022
Stochastic Gradient Methods with Preconditioned Updates
Journal of Optimization Theory and Applications (JOTA), 2022
Abdurakhmon Sadiev
Aleksandr Beznosikov
Abdulla Jasem Almansoori
Dmitry Kamzolov
R. Tappenden
Martin Takáč
ODL
269
12
0
01 Jun 2022
A principled framework for the design and analysis of token algorithms
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Aymeric Dieuleveut
FedML
222
17
0
30 May 2022
Confederated Learning: Federated Learning with Decentralized Edge Servers
IEEE Transactions on Signal Processing (IEEE Trans. Signal Process.), 2022
Bin Wang
Jun Fang
Hongbin Li
Xiaojun Yuan
Qing Ling
FedML
230
31
0
30 May 2022
Stochastic Gradient Methods with Compressed Communication for Decentralized Saddle Point Problems
Chhavi Sharma
Vishnu Narayanan
P. Balamurugan
159
2
0
28 May 2022
Theoretical Analysis of Primal-Dual Algorithm for Non-Convex Stochastic Decentralized Optimization
Yuki Takezawa
Kenta Niwa
M. Yamada
198
4
0
23 May 2022
SADAM: Stochastic Adam, A Stochastic Operator for First-Order Gradient-based Optimizer
Wei Zhang
Yun-Jian Bao
ODL
197
2
0
20 May 2022
On the efficiency of Stochastic Quasi-Newton Methods for Deep Learning
M. Yousefi
Angeles Martinez
ODL
128
1
0
18 May 2022
Federated Random Reshuffling with Compression and Variance Reduction
Grigory Malinovsky
Peter Richtárik
FedML
300
12
0
08 May 2022
Communication Compression for Decentralized Learning with Operator Splitting Methods
IEEE Transactions on Signal and Information Processing over Networks (TSIPN), 2022
Yuki Takezawa
Kenta Niwa
M. Yamada
206
3
0
08 May 2022
Byzantine Fault Tolerance in Distributed Machine Learning: A Survey
Djamila Bouhata
Hamouma Moumen
Moumen Hamouma
Ahcène Bounceur
AI4CE
301
9
0
05 May 2022
An Adaptive Incremental Gradient Method With Support for Non-Euclidean Norms
Binghui Xie
Chen Jin
Kaiwen Zhou
James Cheng
Wei Meng
183
1
0
28 Apr 2022
Neighbor-Based Optimized Logistic Regression Machine Learning Model For Electric Vehicle Occupancy Detection
S. Shaw
Keaton Chia
J. Kleissl
51
1
0
28 Apr 2022
FedShuffle: Recipes for Better Use of Local Work in Federated Learning
Samuel Horváth
Maziar Sanjabi
Lin Xiao
Peter Richtárik
Michael G. Rabbat
FedML
283
22
0
27 Apr 2022
FedCau: A Proactive Stop Policy for Communication and Computation Efficient Federated Learning
IEEE Transactions on Wireless Communications (TWC), 2022
Afsaneh Mahmoudi
H. S. Ghadikolaei
José Hélio da Cruz Júnior
Carlo Fischione
108
11
0
16 Apr 2022
A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction
SIAM Journal on Optimization (SIAM J. Optim.), 2022
Andre Milzarek
Fabian Schaipp
M. Ulbrich
228
8
0
01 Apr 2022
An Adaptive Gradient Method with Energy and Momentum
Annals of Applied Mathematics (AAM), 2022
Hailiang Liu
Xuping Tian
ODL
158
10
0
23 Mar 2022
Closing the Generalization Gap of Cross-silo Federated Medical Image Segmentation
Computer Vision and Pattern Recognition (CVPR), 2022
An Xu
Wenqi Li
Pengfei Guo
Dong Yang
H. Roth
Ali Hatamizadeh
Can Zhao
Daguang Xu
Heng-Chiao Huang
Ziyue Xu
FedML
193
66
0
18 Mar 2022
Learning Distributionally Robust Models at Scale via Composite Optimization
International Conference on Learning Representations (ICLR), 2022
Farzin Haddadpour
Mohammad Mahdi Kamani
M. Mahdavi
Amin Karbasi
OOD
165
5
0
17 Mar 2022
Stochastic Halpern Iteration with Variance Reduction for Stochastic Monotone Inclusions
Neural Information Processing Systems (NeurIPS), 2022
Xu Cai
Chaobing Song
Cristóbal Guzmán
Jelena Diakonikolas
311
14
0
17 Mar 2022
Don't fear the unlabelled: safe semi-supervised learning via simple debiasing
International Conference on Learning Representations (ICLR), 2022
Hugo Schmutz
O. Humbert
Pierre-Alexandre Mattei
280
14
0
14 Mar 2022
Accelerating Plug-and-Play Image Reconstruction via Multi-Stage Sketched Gradients
Junqi Tang
182
2
0
14 Mar 2022
Fast Gradient Methods for Data-Consistent Local Super-Resolution of Medical Images
Junqi Tang
Guixian Xu
Jinglai Li
SupR
370
0
0
22 Feb 2022
MSTGD: A Memory Stochastic sTratified Gradient Descent Method with an Exponential Convergence Rate
Aixiang Chen
Chen
Jinting Zhang
Zanbo Zhang
Zhihong Li
186
0
0
21 Feb 2022
Policy Learning and Evaluation with Randomized Quasi-Monte Carlo
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Sébastien M. R. Arnold
P. L'Ecuyer
Liyu Chen
Yi-fan Chen
Fei Sha
OffRL
180
4
0
16 Feb 2022
Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Aleksandr Beznosikov
Eduard A. Gorbunov
Hugo Berard
Nicolas Loizou
332
58
0
15 Feb 2022
Equivariance Regularization for Image Reconstruction
Junqi Tang
201
3
0
10 Feb 2022