Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime

We study population convergence guarantees of stochastic gradient descent (SGD) for smooth convex objectives in the interpolation regime, where the noise at the optimum is zero or near zero. The behavior of the last iterate of SGD in this setting -- particularly with large (constant) stepsizes -- has received growing attention in recent years due to its implications for the training of over-parameterized models, as well as for the analysis of forgetting in continual learning and for understanding the convergence of the randomized Kaczmarz method for solving linear systems. We establish that after T steps of SGD on β-smooth convex loss functions with stepsize η, the last iterate attains an expected excess risk bound that decays with T and scales with σ², where σ² denotes the variance of the stochastic gradients at the optimum. In particular, for a well-tuned stepsize we obtain a near optimal rate for the last iterate, extending the results of Varre et al. (2021) beyond least squares regression; and in the noiseless case σ = 0 we obtain a faster rate, improving upon the best-known rate recently established by Evron et al. (2025) in the special case of realizable linear regression.
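As a concrete illustration of the setting (not a construction from the paper), the sketch below runs constant-stepsize SGD on a synthetic realizable least-squares instance, where the noise at the optimum is exactly zero and each update coincides, up to the stepsize, with a randomized Kaczmarz step. The problem sizes, the stepsize choice η = 1/β, and the horizon T are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative realizable (interpolation) least-squares instance:
# data (a_i, b_i) with b_i = <a_i, w_star>, so the noise at the optimum is zero.
n, d = 200, 20
A = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
b = A @ w_star

beta = np.max(np.linalg.norm(A, axis=1) ** 2)  # smoothness constant of the per-sample losses
eta = 1.0 / beta                               # large constant stepsize (assumed choice)

def excess_risk(w):
    # empirical risk of the squared loss; equals zero at w_star in this realizable instance
    return 0.5 * np.mean((A @ w - b) ** 2)

w = np.zeros(d)
T = 5000
for t in range(T):
    i = rng.integers(n)                  # sample one example uniformly at random
    grad = (A[i] @ w - b[i]) * A[i]      # stochastic gradient of 0.5 * (<a_i, w> - b_i)^2
    w -= eta * grad                      # with eta = 1/||a_i||^2 this would be exactly a Kaczmarz step

print(f"last-iterate excess risk after T={T} steps: {excess_risk(w):.3e}")
```

Tracking excess_risk(w) across iterations shows the last iterate converging despite the constant stepsize, which is the regime the paper's guarantees concern.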