
Stacey: Promoting Stochastic Steepest Descent via Accelerated $\ell_p$-Smooth Nonconvex Optimization

Main: 8 pages, 3 figures, 6 tables; Appendix: 5 pages; Bibliography: 4 pages
Abstract

While popular optimization methods such as SGD, AdamW, and Lion rely on steepest descent updates in either the $\ell_2$ or $\ell_\infty$ norm, there remains a critical gap in handling the non-Euclidean structure observed in the training of modern deep networks. In this work, we address this need by introducing a new accelerated $\ell_p$ steepest descent algorithm, called Stacey, which uses interpolated primal-dual iterate sequences to effectively navigate non-Euclidean smooth optimization tasks. In addition to providing novel theoretical guarantees for the foundations of our algorithm, we empirically compare our approach against these popular methods on tasks including image classification and large language model (LLM) pretraining, demonstrating both faster convergence and higher final accuracy. We further evaluate different values of $p$ across various models and datasets, underscoring the importance and efficiency of non-Euclidean approaches over standard Euclidean methods. Code can be found at this https URL.
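
The Stacey update itself is not reproduced in the abstract, but the role of $p$ can be illustrated with a minimal, unaccelerated sketch of the $\ell_p$ steepest descent direction that such methods build on. The PyTorch function below (its name and the usage loop are illustrative assumptions, not the paper's algorithm) recovers normalized gradient descent at $p = 2$ and sign descent as $p \to \infty$:

import torch

def lp_steepest_descent_direction(grad: torch.Tensor, p: float, eps: float = 1e-12) -> torch.Tensor:
    # Unit steepest-descent direction with respect to the l_p norm (p > 1):
    #   argmin_{||d||_p <= 1} <grad, d> = -sign(g) * |g|^(q-1) / ||g||_q^(q-1),
    # where q = p / (p - 1) is the dual exponent.
    # p = 2 gives normalized gradient descent; p -> infinity gives sign descent.
    q = p / (p - 1.0)
    scaled = grad.abs().pow(q - 1.0)
    dual_norm = grad.abs().pow(q).sum().pow(1.0 / q).clamp_min(eps)
    return -torch.sign(grad) * scaled / dual_norm.pow(q - 1.0)

# Hypothetical plain training step (no acceleration, no primal-dual interpolation):
# for param in model.parameters():
#     param.data.add_(lr * lp_steepest_descent_direction(param.grad, p=3.0))

Stacey combines non-Euclidean updates of this kind with accelerated, interpolated primal-dual iterate sequences; see the paper for the full method.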

@article{luo2025_2506.06606,
  title={ Stacey: Promoting Stochastic Steepest Descent via Accelerated $\ell_p$-Smooth Nonconvex Optimization },
  author={ Xinyu Luo and Cedar Site Bai and Bolian Li and Petros Drineas and Ruqi Zhang and Brian Bullins },
  journal={arXiv preprint arXiv:2506.06606},
  year={ 2025 }
}