
Approximation capabilities of neural networks on unbounded domains

Neural Networks (NN), 2019
Abstract

We prove that if $p \in [2, \infty)$ and the activation function is a monotone sigmoid, relu, elu, softplus, or leaky relu, then the shallow neural network is a universal approximator in $L^{p}(\mathbb{R} \times [0, 1]^n)$. This generalizes classical universal approximation theorems on $[0,1]^n$. We also prove that if $p \in [1, \infty)$ and the activation function is a sigmoid, relu, elu, softplus, or leaky relu, then the shallow neural network expresses no non-zero function in $L^{p}(\mathbb{R} \times \mathbb{R}^+)$. Consequently, a shallow relu network expresses no non-zero function in $L^{p}(\mathbb{R}^n)$ for $n \ge 2$. Some authors, on the other hand, have shown that deep relu networks are universal approximators in $L^{p}(\mathbb{R}^n)$. Together, these results give a qualitative viewpoint that justifies the benefit of depth in the context of relu networks.
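For reference (our notation, not taken from the paper), the shallow networks in these statements are single-hidden-layer networks of the standard form

$$f(x) \;=\; \sum_{i=1}^{N} a_i\,\sigma(w_i \cdot x + b_i), \qquad a_i, b_i \in \mathbb{R},\; w_i, x \in \mathbb{R}^{d},$$

where $\sigma$ is the activation function. A rough intuition for the negative result (a heuristic, not the paper's proof): with $\sigma = \mathrm{relu}$ the sum is piecewise affine with finitely many polyhedral pieces, and a non-zero affine function is never $p$-integrable over an unbounded region, so such a network has very little room to decay on all of $\mathbb{R}^n$ when $n \ge 2$; the paper makes this precise for relu and the other listed activations.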
