
Benign Overfitting under Learning Rate Conditions for α Sub-exponential Input

Main: 7 pages · Appendix: 19 pages · Bibliography: 3 pages · 8 figures · 3 tables
Abstract

This paper investigates the phenomenon of benign overfitting in binary classification with heavy-tailed input distributions, extending the analysis of maximum margin classifiers to α sub-exponential distributions with α ∈ (0, 2]. This generalizes previous work that focused on sub-gaussian inputs. We provide generalization error bounds for linear classifiers trained by gradient descent on the unregularized logistic loss in this heavy-tailed setting. Our results show that, under certain conditions on the dimension p and the distance between the centers of the class-conditional distributions, the misclassification error of the maximum margin classifier asymptotically approaches the noise level, which is the theoretically optimal value. Moreover, we derive an upper bound on the learning rate β under which benign overfitting occurs, and show that this upper bound decreases as the tails of the input distribution become heavier. These results demonstrate that benign overfitting persists even with heavier-tailed inputs than previously studied, contributing to a deeper understanding of the phenomenon in more realistic data environments.
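The setting described above can be sketched numerically. The snippet below is an illustrative toy experiment, not the paper's construction: it draws a two-cluster mixture with α = 1 sub-exponential (Laplace) input noise, flips a fraction of training labels, and runs gradient descent with learning rate β on the unregularized logistic loss. All concrete values (dimension p, sample size n, noise rate eta, center separation mu, learning rate beta, iteration count) are our own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, eta = 500, 50, 0.1            # dimension, sample size, label-noise level
mu = np.zeros(p)
mu[0] = 8.0                         # separation between the two class centers

# Heavy-tailed inputs: Laplace noise is alpha = 1 sub-exponential.
y_clean = rng.choice([-1.0, 1.0], size=n)
X = y_clean[:, None] * mu + rng.laplace(size=(n, p))
y = np.where(rng.random(n) < eta, -y_clean, y_clean)   # flip labels w.p. eta

beta = 0.1                          # learning rate
w = np.zeros(p)
for _ in range(2000):
    margins = y * (X @ w)
    # Gradient of the average logistic loss log(1 + exp(-margin))
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= beta * grad

# In the overparameterized regime (p >> n) the classifier typically fits
# the noisy training labels exactly (overfits) ...
train_err = np.mean(np.sign(X @ w) != y)
# ... while still generalizing on fresh, clean data (benign overfitting).
y_test = rng.choice([-1.0, 1.0], size=1000)
X_test = y_test[:, None] * mu + rng.laplace(size=(1000, p))
test_err = np.mean(np.sign(X_test @ w) != y_test)
```

Since the unregularized logistic-loss iterates converge in direction to the maximum margin classifier on separable data, the trained `w` serves as a proxy for the classifier the paper analyzes; the paper's contribution is the condition on β under which the test error provably approaches the noise level in this heavy-tailed regime.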
