Approximation capabilities of neural networks on unbounded domains
Neural Networks, 2019
Abstract
If $p \in [2, +\infty)$ and if the activation function belongs to a monotone sigmoid, relu, elu, softplus or leaky relu, we prove that neural networks are universal approximators of $L^p(\mathbb{R} \times [0,1]^n)$. This generalizes corresponding universal approximation theorems on $[0,1]^n$. Moreover, if $p \in [1, 2)$ and if the activation function belongs to a sigmoid, relu, elu, softplus or leaky relu, we show that neural networks never represent non-zero functions in $L^p(\mathbb{R} \times [0,1]^n)$ and $L^p(\mathbb{R}^2)$.
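To make the two claims concrete, here is a sketch in standard notation of the function class presumably meant by "neural networks" in this setting: single-hidden-layer networks with activation $\sigma$. The symbols $k$, $a_i$, $w_i$, $b_i$ below follow the usual convention and are assumed, not quoted from the paper:

\[
  N(x) \;=\; \sum_{i=1}^{k} a_i \, \sigma(w_i \cdot x + b_i),
  \qquad a_i, b_i \in \mathbb{R}, \; w_i \in \mathbb{R}^{n+1}, \; x \in \mathbb{R} \times [0,1]^n .
\]

Read this way, the positive result says that for every $f \in L^p(\mathbb{R} \times [0,1]^n)$ with $p \in [2, +\infty)$ and every $\varepsilon > 0$ there is such an $N$ with $\|f - N\|_{L^p} < \varepsilon$, while the negative result says that for $p \in [1, 2)$ the only such $N$ lying in $L^p$ is the zero function.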
