Harmful Overfitting in Sobolev Spaces
Motivated by recent work on benign overfitting in overparameterized machine learning, we study the generalization behavior of functions in Sobolev spaces that perfectly fit a noisy training data set. Under assumptions of label noise and sufficient regularity of the data distribution, we show that approximately norm-minimizing interpolators, the canonical solutions selected by smoothness bias, exhibit harmful overfitting: even as the training sample size $n \to \infty$, the generalization error remains bounded below by a positive constant with high probability. Our results hold for arbitrary values of $p$, in contrast to prior results studying the Hilbert space case ($p = 2$) using kernel methods. Our proof uses a geometric argument that identifies harmful neighborhoods of the training data via Sobolev inequalities.
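The shape of the result can be sketched schematically as follows; the notation ($\hat f_n$, $f^*$, $\xi_i$, $\varepsilon$, $c$) is illustrative and not taken verbatim from the paper:

```latex
% Schematic statement of harmful overfitting for near-minimal-norm
% Sobolev interpolators (notation illustrative, not the paper's own).
\begin{align*}
  &\text{Noisy data: } y_i = f^*(x_i) + \xi_i, \quad i = 1, \dots, n, \\
  &\text{Interpolation: } \hat f_n(x_i) = y_i \text{ for all } i, \\
  &\text{Near-minimal norm: } \|\hat f_n\|_{W^{s,p}} \le (1 + \varepsilon)
     \inf\bigl\{\|f\|_{W^{s,p}} : f(x_i) = y_i \;\forall i\bigr\}, \\
  &\text{Conclusion (w.h.p.): } \liminf_{n \to \infty}
     \|\hat f_n - f^*\|_{L^2}^2 \ge c > 0.
\end{align*}
```

The contrast with the benign-overfitting literature is that here the excess risk does not decay with $n$: the smoothness bias that selects the interpolator is not enough to absorb the label noise.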