ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent

21 October 2021
Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad

Papers citing "Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent"

11 / 11 papers shown
An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness
Xiaochuan Gong, Jie Hao, Mingrui Liu
28 Sep 2024 · 31 · 2 · 0

Enhancing Policy Gradient with the Polyak Step-Size Adaption
Yunxiang Li, Rui Yuan, Chen Fan, Mark W. Schmidt, Samuel Horváth, Robert Mansel Gower, Martin Takáč
11 Apr 2024 · 14 · 0 · 0

Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation
Aaron Mishkin, Mert Pilanci, Mark Schmidt
03 Apr 2024 · 51 · 0 · 0

(Accelerated) Noise-adaptive Stochastic Heavy-Ball Momentum
Anh Dang, Reza Babanezhad, Sharan Vaswani
12 Jan 2024 · 17 · 0 · 0

Parameter-Agnostic Optimization under Relaxed Smoothness
Florian Hübler, Junchi Yang, Xiang Li, Niao He
06 Nov 2023 · 18 · 12 · 0

Two Sides of One Coin: the Limits of Untuned SGD and the Power of Adaptive Methods
Junchi Yang, Xiang Li, Ilyas Fatkhullin, Niao He
21 May 2023 · 29 · 15 · 0

Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization
Jun-Kun Wang, Andre Wibisono
15 Feb 2023 · 14 · 9 · 0

Target-based Surrogates for Stochastic Optimization
J. Lavington, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Nicolas Le Roux
06 Feb 2023 · 22 · 5 · 0

Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization
Benjamin Dubois-Taine, Francis R. Bach, Quentin Berthet, Adrien B. Taylor
25 May 2022 · 10 · 5 · 0

Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
James Lucas, Juhan Bae, Michael Ruogu Zhang, Stanislav Fort, R. Zemel, Roger C. Grosse
MoMe
22 Apr 2021 · 146 · 28 · 0

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016 · 119 · 1,194 · 0