
Approximation capabilities of neural networks on unbounded domains

Neural Networks (NN), 2019
Abstract

If $p \in (1, \infty)$ and the activation function is a monotone sigmoid, ReLU, ELU, softplus, or leaky ReLU, we prove that neural networks are universal approximators of $L^{p}(\mathbb{R} \times [0,1]^n)$. This generalizes corresponding universal approximation theorems on $[0,1]^n$. Moreover, if $p \in (1, \infty)$ and the activation function is a sigmoid, ReLU, ELU, softplus, or leaky ReLU, we show that neural networks never represent non-zero functions in $L^{p}(\mathbb{R} \times \mathbb{R}^+)$ or $L^{p}(\mathbb{R}^2)$.
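
For concreteness, a minimal sketch of the first (density) statement, assuming the standard one-hidden-layer network class from the universal approximation literature; the paper's exact architecture and hypotheses may differ:

% Hypothetical formalization; the class N_sigma below is the usual
% shallow-network class, assumed here for illustration.
\[
  \mathcal{N}_{\sigma} \;=\; \Bigl\{\, x \mapsto \textstyle\sum_{i=1}^{N} c_i\, \sigma(\langle a_i, x\rangle + b_i) \;:\; N \in \mathbb{N},\; a_i \in \mathbb{R}^{n+1},\; b_i, c_i \in \mathbb{R} \,\Bigr\}
\]
% Density of N_sigma in L^p(R x [0,1]^n) for p in (1, infty):
\[
  \forall f \in L^{p}(\mathbb{R} \times [0,1]^n),\;\; \forall \varepsilon > 0,\;\; \exists g \in \mathcal{N}_{\sigma} \;:\; \|f - g\|_{L^{p}} < \varepsilon .
\]

The second result is the complementary negative statement: no non-zero element of $L^{p}(\mathbb{R} \times \mathbb{R}^+)$ or $L^{p}(\mathbb{R}^2)$ is itself a function of this form, so density on $\mathbb{R} \times [0,1]^n$ does not extend to exact representation on domains unbounded in two directions.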
