
No one-hidden-layer neural network can represent multivariable functions

Abstract

In function approximation with a neural network, an input dataset is mapped to an output dataset by optimizing the parameters of each hidden-layer unit. For a unary function, we present constraints that relate the parameters to the function's second derivative by constructing a continuum version of a one-hidden-layer neural network with the rectified linear unit (ReLU) activation function. These constraints reduce the degrees of freedom of the parameters, so the network can be implemented accurately. We also show that there exists a smooth binary (two-variable) function that no such neural network can represent exactly.
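
The parameter constraint described above is consistent with the Taylor integral representation f(x) = f(0) + f'(0)x + \int_0^\infty f''(b) ReLU(x - b) db, valid for smooth f on x >= 0: it pins each hidden unit's output weight to the second derivative evaluated at that unit's bias. The sketch below is a minimal numerical illustration of this idea, not the paper's construction; the target function sin and all names are illustrative choices.

```python
import numpy as np

# Target unary function on [0, 1]; sin is an arbitrary illustrative choice.
f = np.sin
f_prime_0 = np.cos(0.0)          # f'(0)
f_second = lambda t: -np.sin(t)  # f''(t)

# Discretize the continuum network
#   f(x) ~ f(0) + f'(0) x + sum_k f''(b_k) ReLU(x - b_k) * db
# with one hidden unit per grid point b_k; each output weight is
# fixed by f'' rather than being a free parameter.
n_units = 200
biases = np.linspace(0.0, 1.0, n_units, endpoint=False)
delta_b = biases[1] - biases[0]
output_weights = f_second(biases) * delta_b

def network(x):
    """One-hidden-layer ReLU network whose weights are set by f''."""
    x = np.atleast_1d(x)[:, None]
    hidden = np.maximum(x - biases, 0.0)   # ReLU(x - b_k) for every unit
    return f(0.0) + f_prime_0 * x[:, 0] + hidden @ output_weights

xs = np.linspace(0.0, 1.0, 1000)
print("max |f - network| =", np.max(np.abs(f(xs) - network(xs))))
```

With 200 units the maximum error on [0, 1] is of order the grid spacing, illustrating how fixing the hidden-layer weights by f'' removes the free parameters while preserving accuracy.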
