Adaptive Gradient Methods Converge Faster with Over-Parameterization (but you should do a line-search)
Sharan Vaswani, I. Laradji, Frederik Kunstner, S. Meng, Mark W. Schmidt, Simon Lacoste-Julien
arXiv:2006.06835 · 11 June 2020
Papers citing "Adaptive Gradient Methods Converge Faster with Over-Parameterization (but you should do a line-search)" (7 papers)
Convergence Conditions for Stochastic Line Search Based Optimization of Over-parametrized Models
Matteo Lapucci, Davide Pucci · 06 Aug 2024

Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation
Aaron Mishkin, Mert Pilanci, Mark Schmidt · 03 Apr 2024

Nest Your Adaptive Algorithm for Parameter-Agnostic Nonconvex Minimax Optimization
Junchi Yang, Xiang Li, Niao He · ODL · 01 Jun 2022

Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent
Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad · 21 Oct 2021

SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Simon Lacoste-Julien · 18 Feb 2021

A new regret analysis for Adam-type algorithms
Ahmet Alacaoglu, Yura Malitsky, P. Mertikopoulos, V. Cevher · ODL · 21 Mar 2020

L4: Practical loss-based stepsize adaptation for deep learning
Michal Rolínek, Georg Martius · ODL · 14 Feb 2018