Corner Gradient Descent

We consider SGD-type optimization on infinite-dimensional quadratic problems with power-law spectral conditions. It is well known that on such problems deterministic GD has loss convergence rate $O(t^{-\zeta})$, which can be improved to $O(t^{-2\zeta})$ by using Heavy Ball with a non-stationary Jacobi-based schedule (and the latter rate is optimal among fixed schedules). However, in the mini-batch Stochastic GD setting, the sampling noise causes the Jacobi HB to diverge; accordingly, no $O(t^{-2\zeta})$ algorithm is known in this setting. In this paper we show that rates up to $O(t^{-2\zeta})$ can be achieved by a generalized stationary SGD with infinite memory. We start by identifying generalized (S)GD algorithms with contours in the complex plane. We then show that contours that have a corner with external angle $\theta\pi$ accelerate the plain GD rate $O(t^{-\zeta})$ to $O(t^{-\theta\zeta})$. For deterministic GD, increasing $\theta$ allows one to achieve rates arbitrarily close to $O(t^{-2\zeta})$. However, in Stochastic GD, increasing $\theta$ also amplifies the sampling noise, so in general $\theta$ needs to be optimized by balancing the acceleration and noise effects. We prove that the optimal rate is given by an explicit expression in the exponents $\nu$ and $\zeta$ appearing in the capacity and source spectral conditions. Furthermore, using fast rational approximations of the power functions, we show that ideal corner algorithms can be efficiently approximated by finite-memory algorithms, and demonstrate their practical efficiency on a synthetic problem and MNIST.
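As a concrete illustration of the setting described above (a sketch, not code from the paper), the snippet below builds a truncated diagonal quadratic problem whose Hessian eigenvalues and target coefficients follow power laws, evaluates the plain GD loss in closed form, and fits the empirical decay exponent, which should come out close to the source exponent $\zeta$. The particular parameterization ($\lambda_k = k^{-\nu}$, squared target coefficients $k^{\nu(1-\zeta)-1}$) and the chosen values of $\nu$, $\zeta$, $K$, $T$ are illustrative assumptions, not taken from the paper.

# Minimal sketch: plain GD on a power-law quadratic exhibits L_t ~ t^{-zeta}.
import numpy as np

nu, zeta = 1.5, 0.8                      # capacity and source exponents (assumed values)
K = 100_000                              # truncation of the infinite-dimensional problem
k = np.arange(1, K + 1, dtype=float)

lam = k ** (-nu)                         # eigenvalues: capacity condition lambda_k = k^{-nu}
c2 = k ** (nu * (1.0 - zeta) - 1.0)      # squared target coefficients: source condition with exponent zeta

eta = 1.0 / lam[0]                       # GD step size ~ 1 / largest eigenvalue
T = 1000
ts = np.arange(1, T + 1)

# For a diagonal quadratic, GD decouples across coordinates, so the loss after t steps
# is available in closed form: L_t = 0.5 * sum_k lam_k c_k^2 (1 - eta lam_k)^{2t}.
decay = 1.0 - eta * lam
losses = np.array([0.5 * np.sum(lam * c2 * decay ** (2 * t)) for t in ts])

# Fit the empirical decay exponent on the tail of the loss curve; it should be close to zeta.
tail = ts > T // 10
slope, _ = np.polyfit(np.log(ts[tail]), np.log(losses[tail]), 1)
print(f"empirical loss exponent ~ {-slope:.3f} (target zeta = {zeta})")

The corner algorithms of the paper, which modify this baseline to approach the accelerated exponent, are not reproduced here.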
@article{yarotsky2025_2504.12519,
  title   = {Corner Gradient Descent},
  author  = {Dmitry Yarotsky},
  journal = {arXiv preprint arXiv:2504.12519},
  year    = {2025}
}