
Convergence of Clipped-SGD for Convex $(L_0,L_1)$-Smooth Optimization with Heavy-Tailed Noise

Main: 10 pages, Bibliography: 6 pages, Appendix: 17 pages, 2 tables
Abstract

Gradient clipping is a widely used technique in Machine Learning and Deep Learning (DL), known for its effectiveness in mitigating the impact of heavy-tailed noise, which frequently arises in the training of large language models. Additionally, first-order methods with clipping, such as Clip-SGD, exhibit stronger convergence guarantees than SGD under the $(L_0,L_1)$-smoothness assumption, a property observed in many DL tasks. However, the high-probability convergence of Clip-SGD under both assumptions -- heavy-tailed noise and $(L_0,L_1)$-smoothness -- has not been fully addressed in the literature. In this paper, we bridge this critical gap by establishing the first high-probability convergence bounds for Clip-SGD applied to convex $(L_0,L_1)$-smooth optimization with heavy-tailed noise. Our analysis extends prior results by recovering known bounds for the deterministic case and the stochastic setting with $L_1 = 0$ as special cases. Notably, our rates avoid exponentially large factors and do not rely on restrictive sub-Gaussian noise assumptions, significantly broadening the applicability of gradient clipping.
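For context, a minimal sketch of the standard objects referenced above, written in common notation from the gradient-clipping literature (the step size $\gamma$, clipping level $\lambda$, noise level $\sigma$, and tail exponent $\alpha$ are assumed here, not taken from this abstract):

```latex
% Clipping operator and the Clip-SGD update with step size \gamma and clipping level \lambda:
\operatorname{clip}(g,\lambda) = \min\!\left(1, \tfrac{\lambda}{\|g\|}\right) g,
\qquad
x^{k+1} = x^k - \gamma \, \operatorname{clip}\!\big(\nabla f(x^k,\xi^k), \lambda\big).

% One common first-order form of (L_0,L_1)-smoothness:
\|\nabla f(x) - \nabla f(y)\| \le \big(L_0 + L_1 \|\nabla f(x)\|\big)\, \|x - y\|
\quad \text{whenever } \|x - y\| \le \tfrac{1}{L_1}.

% Heavy-tailed noise is typically modeled via a bounded \alpha-th moment with \alpha \in (1,2]:
\mathbb{E}\big[\|\nabla f(x,\xi) - \nabla f(x)\|^{\alpha}\big] \le \sigma^{\alpha}.
```

These are standard formulations in this line of work; the paper's exact assumptions and constants may differ.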

@article{chezhegov2025_2505.20817,
  title={Convergence of Clipped-SGD for Convex $(L_0,L_1)$-Smooth Optimization with Heavy-Tailed Noise},
  author={Savelii Chezhegov and Aleksandr Beznosikov and Samuel Horváth and Eduard Gorbunov},
  journal={arXiv preprint arXiv:2505.20817},
  year={2025}
}