ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

A Generic Acceleration Framework for Stochastic Composite Optimization
A. Kulunchakov, Julien Mairal
arXiv:1906.01164, 3 June 2019

Papers citing "A Generic Acceleration Framework for Stochastic Composite Optimization" (12 of 12 papers shown):
  1. On the Performance Analysis of Momentum Method: A Frequency Domain Perspective. Xianliang Li, Jun Luo, Zhiwei Zheng, Hanxiao Wang, Li Luo, Lingkun Wen, Linlong Wu, Sheng Xu. 29 Nov 2024.
  2. Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation. Xiao-Tong Yuan, P. Li. 09 Jan 2023.
  3. On Acceleration of Gradient-Based Empirical Risk Minimization using Local Polynomial Regression. Ekaterina Trimbach, Edward Duc Hien Nguyen, César A. Uribe. 16 Apr 2022.
  4. Convergence and Stability of the Stochastic Proximal Point Algorithm with Momentum. J. Kim, Panos Toulis, Anastasios Kyrillidis. 11 Nov 2021.
  5. Training Deep Neural Networks with Adaptive Momentum Inspired by the Quadratic Optimization. Tao Sun, Huaming Ling, Zuoqiang Shi, Dongsheng Li, Bao Wang. 18 Oct 2021.
  6. Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
  7. On the Convergence of Nesterov's Accelerated Gradient Method in Stochastic Settings. Mahmoud Assran, Michael G. Rabbat. 27 Feb 2020.
  8. Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. Filip Hanzely, D. Kovalev, Peter Richtárik. 11 Feb 2020.
  9. The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares. Rong Ge, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli. 29 Apr 2019.
  10. Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise. A. Kulunchakov, Julien Mairal. 25 Jan 2019.
  11. A Proximal Stochastic Gradient Method with Progressive Variance Reduction. Lin Xiao, Tong Zhang. 19 Mar 2014.
  12. Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning. Julien Mairal. 18 Feb 2014.