Stochastic Polyak Stepsize with a Moving Target

22 June 2021
Robert Mansel Gower, Aaron Defazio, Michael G. Rabbat
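For context, the classical stochastic Polyak stepsize (SPS) that this paper builds on sets the learning rate from the current mini-batch loss and a target value f_i^*; the paper's contribution is to replace that fixed target with a moving estimate learned alongside the iterates. Below is a minimal Python sketch of the classical rule on a toy least-squares problem; the function name sps_step and the toy data are illustrative assumptions, not from the paper.

import numpy as np

def sps_step(x, grad, loss, f_star=0.0, gamma_max=1.0):
    # Classical stochastic Polyak stepsize (SPS_max):
    #   gamma_t = min( (f_i(x_t) - f_i^*) / ||grad f_i(x_t)||^2 , gamma_max )
    # The cited paper replaces the fixed target f_i^* with a moving target.
    g2 = float(np.dot(grad, grad))
    gamma = min((loss - f_star) / (g2 + 1e-12), gamma_max)
    return x - gamma * grad

# Toy usage: stochastic least squares, one sampled row per step,
# with f_i^* = 0 (a reasonable target under interpolation).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
x = np.zeros(5)
for t in range(200):
    i = rng.integers(100)
    r = A[i] @ x - b[i]                              # residual of sampled row
    x = sps_step(x, grad=r * A[i], loss=0.5 * r**2)  # per-sample loss and gradient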

Papers citing "Stochastic Polyak Stepsize with a Moving Target"

14 / 14 papers shown
Temporal Context Consistency Above All: Enhancing Long-Term Anticipation by Learning and Enforcing Temporal Constraints
Alberto Maté, Mariella Dimiccoli · AI4TS · 27 Dec 2024

The High Line: Exact Risk and Learning Rate Curves of Stochastic Adaptive Learning Rate Algorithms
Elizabeth Collins-Woodfin, Inbar Seroussi, Begoña García Malaxechebarría, Andrew W. Mackenzie, Elliot Paquette, Courtney Paquette · 30 May 2024

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
Sayantan Choudhury, N. Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard A. Gorbunov · 05 Mar 2024

AdaBatchGrad: Combining Adaptive Batch Size and Adaptive Step Size
P. Ostroukhov, Aigerim Zhumabayeva, Chulu Xiang, Alexander Gasnikov, Martin Takáč, Dmitry Kamzolov · ODL · 07 Feb 2024

SANIA: Polyak-type Optimization Framework Leads to Scale Invariant Stochastic Algorithms
Farshed Abdukhakimov, Chulu Xiang, Dmitry Kamzolov, Robert Mansel Gower, Martin Takáč · 28 Dec 2023

Adaptive SGD with Polyak stepsize and Line-search: Robust Convergence and Variance Reduction
Xiaowen Jiang, Sebastian U. Stich · 11 Aug 2023

Function Value Learning: Adaptive Learning Rates Based on the Polyak Stepsize and Function Splitting in ERM
Guillaume Garrigos, Robert Mansel Gower, Fabian Schaipp · 26 Jul 2023

Variational Inference with Gaussian Score Matching
Chirag Modi, C. Margossian, Yuling Yao, Robert Mansel Gower, David M. Blei, Lawrence K. Saul · 15 Jul 2023

Locally Adaptive Federated Learning
Sohom Mukherjee, Nicolas Loizou, Sebastian U. Stich · FedML · 12 Jul 2023

Don't be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models
Leonardo Galli, Holger Rauhut, Mark W. Schmidt · 22 Jun 2023

Prodigy: An Expeditiously Adaptive Parameter-Free Learner
Konstantin Mishchenko, Aaron Defazio · ODL · 09 Jun 2023

Layer-wise Adaptive Step-Sizes for Stochastic First-Order Methods for Deep Learning
Achraf Bahamou, D. Goldfarb · ODL · 23 May 2023

Adaptive Learning Rates for Faster Stochastic Gradient Methods
Samuel Horváth, Konstantin Mishchenko, Peter Richtárik · ODL · 10 Aug 2022

SP2: A Second Order Stochastic Polyak Method
Shuang Li, W. Swartworth, Martin Takáč, Deanna Needell, Robert Mansel Gower · 17 Jul 2022