ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation

2 March 2021
Zhize Li, Slavomír Hanzely, Peter Richtárik
arXiv: 2103.01447

Papers citing "ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation"

12 papers shown

Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling
Daniil Medyakov, Gleb Molodtsov, S. Chezhegov, Alexey Rebrikov, Aleksandr Beznosikov
21 Feb 2025

A Coefficient Makes SVRG Effective
Yida Yin, Zhiqiu Xu, Zhiyuan Li, Trevor Darrell, Zhuang Liu
09 Nov 2023

Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features
Aleksandr Beznosikov, David Dobre, Gauthier Gidel
23 Apr 2023

Decentralized Stochastic Gradient Descent Ascent for Finite-Sum Minimax Problems
Hongchang Gao
06 Dec 2022

Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning
Kazusato Oko, Shunta Akiyama, Tomoya Murata, Taiji Suzuki
FedML
01 Sep 2022

Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization
Zhize Li, Jian Li
22 Aug 2022

SPIRAL: A superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization
Pourya Behmandpoor, P. Latafat, Andreas Themelis, Marc Moonen, Panagiotis Patrinos
17 Jul 2022

Stochastic Gradient Methods with Preconditioned Updates
Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, R. Tappenden, Martin Takáč
ODL
01 Jun 2022

Faster Rates for Compressed Federated Learning with Client-Variance Reduction
Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
FedML
24 Dec 2021

Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč
26 Nov 2021

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li
21 Mar 2021

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik
ODL
25 Aug 2020