Expressivity and Approximation Properties of Deep Neural Networks with ReLU^k Activation

In this paper, we investigate the expressivity and approximation properties of deep neural networks employing the ReLU^k activation function for k ≥ 2. Although deep ReLU networks can approximate polynomials effectively, deep ReLU^k networks have the capability to represent higher-degree polynomials precisely. Our initial contribution is a comprehensive, constructive proof of polynomial representation by deep ReLU^k networks, which yields an upper bound on both the size and the number of network parameters. Consequently, we are able to establish a suboptimal approximation rate for functions from Sobolev spaces as well as for analytic functions. Additionally, by examining the power of deep ReLU^k networks to represent shallow networks, we show that deep ReLU^k networks can approximate functions from a range of variation spaces beyond those generated solely by the ReLU^k activation function, demonstrating their adaptability in approximating functions across various variation spaces.
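As a minimal illustration of the underlying phenomenon (a sketch, not the paper's construction): with k = 2, the exact identity x² = ReLU(x)² + ReLU(−x)² holds, and products of arguments follow by polarization, so arbitrary polynomials can be assembled exactly from finitely many ReLU² units, whereas plain ReLU networks are piecewise linear and can only approximate them.

```python
def relu(x: float) -> float:
    """Standard ReLU; relu(x) ** 2 is a single ReLU^2 unit."""
    return max(x, 0.0)

def square(x: float) -> float:
    # Exact identity: x^2 = ReLU(x)^2 + ReLU(-x)^2 (two ReLU^2 units).
    return relu(x) ** 2 + relu(-x) ** 2

def product(a: float, b: float) -> float:
    # Polarization: a*b = ((a+b)^2 - a^2 - b^2) / 2,
    # so higher-degree monomials can be built exactly, layer by layer.
    return 0.5 * (square(a + b) - square(a) - square(b))

# Both identities hold to machine precision on a small grid.
for x in [-2.5, -1.0, 0.0, 0.5, 3.0]:
    assert abs(square(x) - x * x) < 1e-12
    for y in [-1.5, 0.0, 2.0]:
        assert abs(product(x, y) - x * y) < 1e-12
```

Stacking the `product` gadget gives exact representations of monomials of any degree, which is the kind of compositional advantage the depth-based constructions in the paper exploit.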