Some negative results for single layer and multilayer feedforward neural networks

Abstract

We prove, for $d \geq 2$, a negative result for the approximation of functions defined on compact subsets of $\mathbb{R}^d$ by single layer feedforward neural networks with arbitrary activation functions. In philosophical terms, this result asserts the existence of learning functions $f(x)$ that are as difficult to approximate with these neural networks as one may wish. We also prove an analogous result (for arbitrary $d$) for neural networks with an arbitrary number of layers, for certain special types of activation functions.
