Super-fast rates of convergence for Neural Networks Classifiers under the Hard Margin Condition

Abstract
We study the classical binary classification problem for hypothesis spaces of Deep Neural Networks (DNNs) with ReLU activation under Tsybakov's low-noise condition with exponent $q > 0$, and its limit case $q \to \infty$, which we refer to as the "hard-margin condition". We show that DNNs which minimize the empirical risk with square loss surrogate and $\ell_p$ penalty can achieve finite-sample excess risk bounds of order $\mathcal{O}(n^{-\alpha})$ for arbitrarily large $\alpha > 0$ under the hard-margin condition, provided that the regression function $\eta$ is sufficiently smooth. The proof relies on a novel decomposition of the excess risk which might be of independent interest.
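To fix ideas, the conditions named in the abstract are commonly stated as follows; this is a sketch in generic notation ($\eta$, $q$, $t_0$, $\mathcal{E}$ are labels chosen here for illustration), not a quotation of the paper's own definitions.

% Regression function and Bayes classifier:
%   \eta(x) = \Pr(Y = 1 \mid X = x), \qquad f^*(x) = \mathbf{1}\{\eta(x) \ge 1/2\}.
\begin{align*}
  &\text{Tsybakov low-noise condition with exponent } q > 0:\\
  &\qquad \Pr_X\bigl(0 < |\eta(X) - \tfrac12| \le t\bigr) \le C\, t^{q}
     \quad \text{for all } t > 0,\\[4pt]
  &\text{Hard-margin condition (the } q \to \infty \text{ limit): there exists } t_0 > 0 \text{ such that}\\
  &\qquad |\eta(X) - \tfrac12| \ge t_0 \quad \text{almost surely},\\[4pt]
  &\text{Excess classification risk of a classifier } f:\\
  &\qquad \mathcal{E}(f) = \Pr\bigl(Y \ne f(X)\bigr) - \Pr\bigl(Y \ne f^*(X)\bigr).
\end{align*}

In this notation, the abstract's claim is that an empirical-risk-minimizing DNN classifier $\hat f_n$ trained on $n$ samples satisfies $\mathcal{E}(\hat f_n) = \mathcal{O}(n^{-\alpha})$ for any fixed $\alpha > 0$ under the hard-margin condition and sufficient smoothness of $\eta$.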
@article{tepakbong2025_2505.08262,
  title   = {Super-fast rates of convergence for Neural Networks Classifiers under the Hard Margin Condition},
  author  = {Nathanael Tepakbong and Ding-Xuan Zhou and Xiang Zhou},
  journal = {arXiv preprint arXiv:2505.08262},
  year    = {2025}
}