
A law of robustness for two-layers neural networks

Abstract

We initiate the study of the inherent tradeoffs between the size of a neural network and its robustness, as measured by its Lipschitz constant. We make a precise conjecture that, for any Lipschitz activation function and for most datasets, any two-layers neural network with $k$ neurons that perfectly fits the data must have its Lipschitz constant larger (up to a constant) than $\sqrt{n/k}$, where $n$ is the number of datapoints. In particular, this conjecture implies that overparametrization is necessary for robustness, since it means that one needs roughly one neuron per datapoint to ensure an $O(1)$-Lipschitz network, while mere data fitting of $d$-dimensional data requires only one neuron per $d$ datapoints. We prove a weaker version of this conjecture when the Lipschitz constant is replaced by an upper bound on it based on the spectral norm of the weight matrix. We also prove the conjecture in the high-dimensional regime $n \approx d$ (which we also refer to as the undercomplete case, since only $k \leq d$ is relevant here). Finally, we prove the conjecture for polynomial activation functions of degree $p$ when $n \approx d^p$. We complement these findings with experimental evidence supporting the conjecture.
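To make the quantities in the conjecture concrete, here is a minimal numerical sketch (not taken from the paper, and not the authors' experimental setup): it builds one particular two-layer ReLU network with $k$ neurons that interpolates $n$ randomly labeled points (random hidden weights, minimum-norm output weights), then compares a standard spectral-norm upper bound on its Lipschitz constant, $\|a\|_2\,\|W\|_{\mathrm{op}}$, against $\sqrt{n/k}$; the paper's precise spectral-norm bound may differ, and the sizes below are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (hypothetical, not taken from the paper):
    # d-dimensional inputs, n datapoints, k hidden neurons.
    d, n, k = 20, 200, 400

    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # inputs on the unit sphere
    y = rng.choice([-1.0, 1.0], size=n)             # random +/-1 labels

    # One particular two-layer network that fits the data: random hidden weights,
    # minimum-norm output weights (least squares on the ReLU features).  ReLU is
    # 1-Lipschitz, so ||a||_2 * ||W||_op upper-bounds the Lipschitz constant of
    # f(x) = a^T ReLU(W x).
    W = rng.standard_normal((k, d)) * np.sqrt(2.0 / d)
    H = np.maximum(X @ W.T, 0.0)                    # n x k feature matrix
    a, *_ = np.linalg.lstsq(H, y, rcond=None)       # exact fit when rank(H) = n

    fit_err = np.max(np.abs(H @ a - y))
    lip_upper = np.linalg.norm(a) * np.linalg.norm(W, 2)   # ||a||_2 * ||W||_op

    print(f"max |f(x_i) - y_i|  : {fit_err:.2e}")
    print(f"spectral Lip. bound : {lip_upper:.2f}")
    print(f"sqrt(n / k)         : {np.sqrt(n / k):.2f}")

Sweeping $k$ in such a sketch (while keeping the network an exact interpolator) is one way to eyeball how the measured bound scales relative to the conjectured $\sqrt{n/k}$ threshold.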
