ResearchTrend.AI

Last iterate convergence of SGD for Least-Squares in the Interpolation regime
Aditya Varre, Loucas Pillaud-Vivien, Nicolas Flammarion
arXiv:2102.03183, 5 February 2021

Papers citing "Last iterate convergence of SGD for Least-Squares in the Interpolation regime"

10 papers shown:
Corner Gradient Descent
Dmitry Yarotsky
16 Apr 2025

Better Rates for Random Task Orderings in Continual Linear Models
Itay Evron, Ran Levinstein, Matan Schliserman, Uri Sherman, Tomer Koren, Daniel Soudry, Nathan Srebro
06 Apr 2025

Scaling Laws in Linear Regression: Compute, Parameters, and Data
Licong Lin, Jingfeng Wu, Sham Kakade, Peter L. Bartlett, Jason D. Lee
12 Jun 2024

Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation
Aaron Mishkin, Mert Pilanci, Mark Schmidt
03 Apr 2024

Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression
Bhavya Agrawalla, Krishnakumar Balasubramanian, Promit Ghosal
20 Feb 2023

From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks
Luca Arnaboldi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro
12 Feb 2023

Vector-Valued Least-Squares Regression under Output Regularity Assumptions
Luc Brogat-Motte, Alessandro Rudi, Céline Brouard, Juho Rousu, Florence d'Alché-Buc
16 Nov 2022

How catastrophic can catastrophic forgetting be in linear regression?
Itay Evron, E. Moroshko, Rachel A. Ward, Nati Srebro, Daniel Soudry
19 May 2022

Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions
Maksim Velikanov, Dmitry Yarotsky
02 Feb 2022

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang
08 Dec 2012