Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization

12 February 2018
Zeyuan Allen-Zhu

Papers citing "Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization"

23 papers
Second-order Information Promotes Mini-Batch Robustness in Variance-Reduced Gradients
Sachin Garg
A. Berahas
Michał Dereziński
23 Apr 2024
Faster Stochastic Algorithms for Minimax Optimization under Polyak-Łojasiewicz Conditions
Neural Information Processing Systems (NeurIPS), 2023
Le-Yu Chen
Boyuan Yao
Luo Luo
29 Jul 2023
Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Neural Information Processing Systems (NeurIPS), 2023
Dachao Lin
Yuze Han
Haishan Ye
Zhihua Zhang
15 Apr 2023
Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization
Neural Information Processing Systems (NeurIPS), 2022
Ali Kavis
Stratis Skoulakis
Kimon Antonakopoulos
L. Dadi
Volkan Cevher
03 Nov 2022
Distributionally Robust Optimization via Ball Oracle Acceleration
Neural Information Processing Systems (NeurIPS), 2022
Y. Carmon
Danielle Hausler
24 Mar 2022
Delayed Projection Techniques for Linearly Constrained Problems: Convergence Rates, Acceleration, and Applications
Xiang Li
Zhihua Zhang
05 Jan 2021
Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova
Pavel Dvurechensky
Alexander Gasnikov
Eduard A. Gorbunov
Sergey Guminov
Dmitry Kamzolov
Innokentiy Shibaev
11 Dec 2020
Global Riemannian Acceleration in Hyperbolic and Spherical Spaces
David Martínez-Rubio
07 Dec 2020
Tight Lower Complexity Bounds for Strongly Convex Finite-Sum Optimization
Min Zhang
Yao Shu
Kun He
17 Oct 2020
Boosting First-Order Methods by Shifting Objective: New Schemes with Faster Worst-Case Rates
Neural Information Processing Systems (NeurIPS), 2020
Kaiwen Zhou
Anthony Man-Cho So
James Cheng
25 May 2020
Multi-consensus Decentralized Accelerated Gradient Descent
Journal of Machine Learning Research (JMLR), 2020
Haishan Ye
Luo Luo
Ziang Zhou
Tong Zhang
02 May 2020
Variance Reduction with Sparse Gradients
International Conference on Learning Representations (ICLR), 2020
Melih Elibol
Lihua Lei
Sai Li
27 Jan 2020
The Practicality of Stochastic Optimization in Imaging Inverse Problems
IEEE Transactions on Computational Imaging (TCI), 2019
Junqi Tang
K. Egiazarian
Mohammad Golbabaee
Mike Davies
22 Oct 2019
A General Analysis Framework of Lower Complexity Bounds for Finite-Sum Optimization
Guangzeng Xie
Luo Luo
Zhihua Zhang
22 Aug 2019
ADASS: Adaptive Sample Selection for Training Acceleration
Shen-Yi Zhao
Hao Gao
Wu-Jun Li
11 Jun 2019
On the Convergence of Memory-Based Distributed SGD
Shen-Yi Zhao
Hao Gao
Wu-Jun Li
30 May 2019
Solving Empirical Risk Minimization in the Current Matrix Multiplication Time
Conference on Learning Theory (COLT), 2019
Y. Lee
Zhao Song
Qiuyi Zhang
11 May 2019
Lower Bounds for Smooth Nonconvex Finite-Sum Optimization
International Conference on Machine Learning (ICML), 2019
Dongruo Zhou
Quanquan Gu
31 Jan 2019
Stochastic Nested Variance Reduction for Nonconvex Optimization
Dongruo Zhou
Pan Xu
Quanquan Gu
20 Jun 2018
Neon2: Finding Local Minima via First-Order Oracles
Zeyuan Allen-Zhu
Yuanzhi Li
17 Nov 2017
Natasha 2: Faster Non-Convex Optimization Than SGD
Zeyuan Allen-Zhu
29 Aug 2017
Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter
International Conference on Machine Learning (ICML), 2017
Zeyuan Allen-Zhu
02 Feb 2017
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu
18 Mar 2016