Error bounds for deep ReLU networks using the Kolmogorov--Arnold superposition theorem

Neural Networks (NN), 2019
Abstract

We prove a theorem on the approximation of multivariate continuous functions by deep ReLU networks, for which the curse of dimensionality is lessened. Our theorem is based on the Kolmogorov--Arnold superposition theorem, and on the approximation by very deep ReLU networks of the inner and outer functions that appear in the superposition.
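The Kolmogorov--Arnold superposition theorem writes any continuous f on [0,1]^n as f(x_1,...,x_n) = sum_{q=0}^{2n} Phi_q( sum_{p=1}^{n} psi_{q,p}(x_p) ), reducing a multivariate function to compositions and sums of univariate ones. The sketch below illustrates that structure with small randomly initialized ReLU networks standing in for the inner (psi) and outer (Phi) functions; the network shapes and helper functions are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def make_mlp(widths, rng):
    """Random weights for a small ReLU MLP (illustrative stand-in for the
    deep networks approximating the inner/outer univariate functions)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(n), np.zeros(m))
            for n, m in zip(widths[:-1], widths[1:])]

def mlp(x, weights):
    """Forward pass: ReLU on hidden layers, linear output; maps R -> R."""
    h = np.atleast_1d(x).astype(float)
    for W, b in weights[:-1]:
        h = relu(W @ h + b)
    W, b = weights[-1]
    return float((W @ h + b)[0])

def kst_network(x, inner, outer):
    """Kolmogorov--Arnold superposition structure:
    f(x_1..x_n) ~ sum_{q=0}^{2n} Phi_q( sum_{p=1}^{n} psi_{q,p}(x_p) )."""
    n = len(x)
    total = 0.0
    for q in range(2 * n + 1):
        s = sum(mlp(x[p], inner[q][p]) for p in range(n))  # inner sum over coords
        total += mlp(s, outer[q])                          # outer function Phi_q
    return total

rng = np.random.default_rng(0)
n = 3  # input dimension
inner = [[make_mlp([1, 8, 8, 1], rng) for _ in range(n)]
         for _ in range(2 * n + 1)]
outer = [make_mlp([1, 8, 8, 1], rng) for _ in range(2 * n + 1)]
y = kst_network(np.array([0.2, 0.5, 0.9]), inner, outer)
```

Training these 2n+1 outer and n(2n+1) inner univariate networks to approximate the functions guaranteed by the theorem is what yields the dimension-lessened error bounds discussed in the abstract.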