Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes

This work studies the (non-)robustness of two-layer neural networks in various high-dimensional linearized regimes. We establish fundamental trade-offs between memorization and robustness, as measured by the Sobolev seminorm of the model w.r.t. the data distribution, i.e. the square root of the average squared Euclidean norm of the gradient of the model w.r.t. its input. More precisely, let $n$ be the number of training examples, $d$ the input dimension, and $k$ the number of hidden neurons in a two-layer neural network. We prove, for a large class of activation functions, that if the model memorizes even a fraction of the training data, then its Sobolev seminorm is lower-bounded in each of the following regimes: (i) infinite-width random features (RF) or neural tangent kernel (NTK); (ii) finite-width RF with proportionate scaling of $d$ and $k$; and (iii) finite-width NTK with proportionate scaling of $d$ and $k$. Moreover, all of these lower bounds are tight: they are attained by the min-norm / least-squares interpolator (when $n$, $d$, and $k$ are in the appropriate interpolating regime). All our results hold as soon as the data is log-concave isotropic and there is label noise, i.e. the target variable is not a deterministic function of the data / features. We empirically validate our theoretical results with experiments. Incidentally, these experiments also reveal, for the first time, (iv) a multiple-descent phenomenon in the robustness of the min-norm interpolator.
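To make the central quantity concrete, the empirical Sobolev seminorm of a two-layer random-features model can be computed directly from its analytic input-gradient. The sketch below is illustrative only: the ReLU activation, the Gaussian (log-concave isotropic) data, the $1/\sqrt{d}$ scaling, and all sizes are assumptions for the example, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 200, 50, 100           # samples, input dimension, hidden width (illustrative)
X = rng.standard_normal((n, d))  # isotropic Gaussian data (a log-concave isotropic example)
W = rng.standard_normal((k, d))  # random (frozen) hidden-layer weights
a = rng.standard_normal(k) / np.sqrt(k)  # trainable output weights

def sobolev_seminorm(X, W, a):
    """Empirical Sobolev seminorm of f(x) = sum_j a_j * relu(w_j.x / sqrt(d)):
    the square root of the average squared Euclidean norm of grad_x f over the sample."""
    Z = X @ W.T / np.sqrt(d)      # pre-activations, shape (n, k)
    S = (Z > 0).astype(float)     # ReLU derivative sigma'(Z)
    # grad_x f(x_i) = sum_j a_j * sigma'(z_ij) * w_j / sqrt(d), stacked into shape (n, d)
    G = (S * a) @ W / np.sqrt(d)
    return np.sqrt(np.mean(np.sum(G**2, axis=1)))

print(sobolev_seminorm(X, W, a))
```

For a trained model, the same computation applied to the fitted output weights gives the robustness measure that the paper's lower bounds constrain once the model memorizes part of the training data.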