arXiv:2108.04552 (v2, latest)
The Benefits of Implicit Regularization from SGD in Least Squares Problems
10 August 2021
Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean Phillips Foster, Sham Kakade
Papers citing "The Benefits of Implicit Regularization from SGD in Least Squares Problems" (9 of 9 papers shown)
| Title | Authors | Tags | Date |
| --- | --- | --- | --- |
| Improved Scaling Laws in Linear Regression via Data Reuse | Licong Lin, Jingfeng Wu, Peter Bartlett | | 10 Jun 2025 |
| Learning Curves of Stochastic Gradient Descent in Kernel Regression | Haihan Zhang, Weicheng Lin, Yuanshi Liu, Cong Fang | | 28 May 2025 |
| Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors | Runxi Cheng, Feng Xiong, Yongxian Wei, Wanyun Zhu, Chun Yuan | MoMe | 11 Mar 2025 |
| Scaling Laws in Linear Regression: Compute, Parameters, and Data | Licong Lin, Jingfeng Wu, Sham Kakade, Peter L. Bartlett, Jason D. Lee | LRM | 12 Jun 2024 |
| Losing momentum in continuous-time stochastic optimisation | Kexin Jin, J. Latz, Chenguang Liu, Alessandro Scagliotti | | 08 Sep 2022 |
| Implicit Regularization with Polynomial Growth in Deep Tensor Factorization | Kais Hariz, Hachem Kadri, Stéphane Ayache, Maher Moakher, Thierry Artières | | 18 Jul 2022 |
| Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent | Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora | | 08 Jul 2022 |
| Regularization Guarantees Generalization in Bayesian Reinforcement Learning through Algorithmic Stability | Aviv Tamar, Daniel Soudry, E. Zisselman | OOD, OffRL | 24 Sep 2021 |
| Comparing Classes of Estimators: When does Gradient Descent Beat Ridge Regression in Linear Models? | Dominic Richards, Yan Sun, Patrick Rebeschini | | 26 Aug 2021 |