Negative results for approximation using single layer and multilayer
feedforward neural networks
Abstract
We prove a negative result for the approximation of functions defined on compact subsets of $\mathbb{R}^d$ (where $d \geq 2$) using single layer feedforward neural networks with an arbitrary continuous activation function. In a nutshell, this result claims the existence of target functions that are as difficult to approximate using these neural networks as one may want. We also demonstrate an analogous result (for general $d$) for neural networks with an arbitrary number of layers, for activation functions that are either rational functions or continuous splines with finitely many pieces.
