Negative results for approximation using single layer and multilayer feedforward neural networks

Abstract

We prove a negative result for the approximation of functions defined on compact subsets of $\mathbb{R}^d$ (where $d \geq 2$) using single layer feedforward neural networks with an arbitrary continuous activation function. In a nutshell, this result asserts the existence of target functions that are as difficult to approximate using these neural networks as one may want. We also demonstrate an analogous result (for general $d \in \mathbb{N}$) for neural networks with an arbitrary number of layers, for activation functions that are either rational functions or continuous splines with finitely many pieces.
