Approximation capabilities of neural networks on unbounded domains
Neural Networks (NN), 2019
Abstract
We prove that if and if the activation function is a monotone sigmoid, relu, elu, softplus or leaky relu, then the shallow neural network is a universal approximator in . This generalizes classical universal approximation theorems on We also prove that if and if the activation function is a sigmoid, relu, elu, softplus or leaky relu, then the shallow neural network expresses no non-zero functions in . Consequently a shallow relu network expresses no non-zero functions in . Some authors, on the other hand, have showed that deep relu network is a universal approximator in . Together we obtained a qualitative viewpoint which justifies the benefit of depth in the context of relu networks.
View on arXivComments on this paper
