Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes

Abstract

This work studies the (non)robustness of two-layer neural networks in various high-dimensional linearized regimes. We establish fundamental trade-offs between memorization and robustness, as measured by the Sobolev seminorm of the model w.r.t. the data distribution, i.e. the square root of the average squared $L_2$-norm of the gradients of the model w.r.t. its input. More precisely, if $n$ is the number of training examples, $d$ is the input dimension, and $k$ is the number of hidden neurons in a two-layer neural network, we prove for a large class of activation functions that, if the model memorizes even a fraction of the training data, then its Sobolev seminorm is lower-bounded by (i) $\sqrt{n}$ in the case of infinite-width random features (RF) or neural tangent kernel (NTK) with $d \gtrsim n$; (ii) $\sqrt{n}$ in the case of finite-width RF with proportionate scaling of $d$ and $k$; and (iii) $\sqrt{n/k}$ in the case of finite-width NTK with proportionate scaling of $d$ and $k$. Moreover, all of these lower bounds are tight: they are attained by the min-norm / least-squares interpolator (when $n$, $d$, and $k$ are in the appropriate interpolating regime). All our results hold as soon as the data is log-concave isotropic and there is label noise, i.e. the target variable is not a deterministic function of the data / features. We empirically validate our theoretical results with experiments. Incidentally, these experiments also reveal, for the first time, (iv) a multiple-descent phenomenon in the robustness of the min-norm interpolator.
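The Sobolev seminorm defined above, $\sqrt{\mathbb{E}_x \|\nabla_x f(x)\|_2^2}$, can be estimated by Monte Carlo over the data. The following is a minimal sketch for a two-layer random-features model with ReLU activation; the model form, sizes, and the isotropic Gaussian data are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 100  # samples, input dimension, hidden width (illustrative sizes)

# Random-features model f(x) = a^T relu(W x / sqrt(d)):
# W is frozen at random initialization; only a would be trained.
W = rng.standard_normal((k, d))
a = rng.standard_normal(k) / np.sqrt(k)
X = rng.standard_normal((n, d))  # isotropic Gaussian data (a log-concave isotropic example)

def sobolev_seminorm(X, W, a):
    """Monte-Carlo estimate of sqrt(E_x ||grad_x f(x)||_2^2)."""
    Z = X @ W.T / np.sqrt(d)       # pre-activations, shape (n, k)
    S = (Z > 0).astype(float)      # relu'(z) evaluated at each pre-activation
    G = (S * a) @ W / np.sqrt(d)   # grad_x f(x_i) = W^T (a * relu'(Wx_i/sqrt(d))) / sqrt(d)
    return np.sqrt(np.mean(np.sum(G**2, axis=1)))

print(sobolev_seminorm(X, W, a))
```

The lower bounds in the abstract say that once such a model memorizes noisy labels, this estimated seminorm cannot stay small; it must grow like $\sqrt{n}$ (RF) or $\sqrt{n/k}$ (finite-width NTK) in the stated regimes.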
