
Neural network integral representations with the ReLU activation function

Mathematical and Scientific Machine Learning (MSML), 2019
Abstract

We derive a formula for the integral representation of a shallow neural network with the ReLU activation function, under the assumption that the outer weights have finite L_1-norm with respect to the Lebesgue measure on the sphere. In the case of univariate target functions, we further provide a closed-form formula for all possible representations. Additionally, in this case our formula allows one to explicitly solve for the least-L_1-norm neural network representation of a given function.
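As a concrete point of reference for the univariate case (this is the classical identity underlying such representations, not necessarily the paper's general spherical formula): for a twice-differentiable f on [0, 1], Taylor's theorem with integral remainder gives f(x) = f(0) + f'(0) x + \int_0^1 f''(b) (x - b)_+ db, where (.)_+ is the ReLU. Discretizing the integral turns this into a finite shallow ReLU network whose outer weights sample f''(b) db. The sketch below checks this numerically; the target f = sin, the node count N, and the midpoint quadrature are illustrative choices, not taken from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical target: f(x) = sin(x) on [0, 1], with f'(0) = 1 and f''(b) = -sin(b).
f = np.sin
df0 = 1.0
d2f = lambda b: -np.sin(b)

N = 200                               # hidden units = quadrature nodes (illustrative)
b = (np.arange(N) + 0.5) / N          # midpoint quadrature nodes in (0, 1), used as ReLU breakpoints
a = d2f(b) / N                        # outer weights sampling f''(b) db

x = np.linspace(0.0, 1.0, 101)
# Finite-network discretization of f(0) + f'(0) x + \int_0^1 f''(b) (x - b)_+ db
f_net = f(0.0) + df0 * x + relu(x[:, None] - b[None, :]) @ a

print("max abs error:", np.abs(f_net - f(x)).max())  # small for smooth f; shrinks as N grows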
