Optimal Rates for Robust Stochastic Convex Optimization

Machine learning algorithms in high-dimensional settings are highly susceptible to even a small fraction of structured outliers, making robust optimization techniques essential. In particular, within the ε-contamination model, where an adversary can inspect and replace up to an ε-fraction of the samples, a fundamental open problem is determining the optimal rates for robust stochastic convex optimization (SCO) under such contamination. We develop novel algorithms that achieve minimax-optimal excess risk (up to logarithmic factors) under the ε-contamination model. Our approach improves on existing algorithms, which are not only suboptimal but also require stringent assumptions, including Lipschitz continuity and smoothness of individual sample functions. By contrast, our optimal algorithms do not require these stringent assumptions, assuming only population-level smoothness of the loss. Moreover, our algorithms can be adapted to handle the case in which the covariance parameter is unknown, and they extend to nonsmooth population risks via convolutional smoothing. We complement our algorithmic developments with a tight information-theoretic lower bound for robust SCO.
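To make the setting concrete, here is a minimal NumPy sketch of the ε-contamination model the abstract refers to: an adversary inspects the clean samples and replaces up to an ε-fraction of them with arbitrary points. The function name `contaminate` and the planted-outlier adversary are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def contaminate(samples, eps, adversary, rng=None):
    """Simulate the eps-contamination model: an adversary inspects the clean
    samples and replaces up to an eps-fraction with arbitrary points.
    `adversary` is any map from the clean data to replacement points
    (hypothetical interface; the model places no restriction on it)."""
    rng = np.random.default_rng(rng)
    n = len(samples)
    k = int(np.floor(eps * n))          # number of points the adversary may replace
    idx = rng.choice(n, size=k, replace=False)
    corrupted = samples.copy()
    corrupted[idx] = adversary(samples, k)
    return corrupted

# Example: clean Gaussian data; the adversary plants far-away outliers.
rng = np.random.default_rng(0)
clean = rng.standard_normal((1000, 5))
planted = lambda s, k: 100.0 * np.ones((k, s.shape[1]))
data = contaminate(clean, eps=0.05, adversary=planted, rng=1)
```

The abstract also mentions handling nonsmooth population risks via convolutional smoothing, i.e., replacing F with F_β(w) = E_{u ~ Unif(B)}[F(w + βu)], which is smooth even when F is not. Below is a generic Monte Carlo sketch of this idea using the standard uniform-ball kernel; it illustrates the technique only and need not match the paper's exact estimator.

```python
def smoothed_grad(grad_F, w, beta, m=100, rng=None):
    """Monte Carlo estimate of the gradient of the convolution-smoothed risk
    F_beta(w) = E_{u ~ Unif(unit ball)}[F(w + beta * u)], obtained by
    averaging (sub)gradients of F at randomly perturbed points."""
    rng = np.random.default_rng(rng)
    d = w.shape[0]
    # Sample u uniformly from the unit Euclidean ball:
    # uniform direction on the sphere, radius r = U^(1/d).
    g = rng.standard_normal((m, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = rng.random(m) ** (1.0 / d)
    u = g * r[:, None]
    return np.mean([grad_F(w + beta * ui) for ui in u], axis=0)

# e.g. smoothing the nonsmooth risk F(w) = ||w||_1 via its subgradient sign(w)
g = smoothed_grad(np.sign, np.array([0.3, -0.2, 0.0]), beta=0.1, rng=2)
```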
@article{gao2025_2412.11003,
  title={Optimal Rates for Robust Stochastic Convex Optimization},
  author={Changyu Gao and Andrew Lowy and Xingyu Zhou and Stephen J. Wright},
  journal={arXiv preprint arXiv:2412.11003},
  year={2025}
}