Expressivity of Shallow and Deep Neural Networks for Polynomial Approximation
Abstract
We analyze the number of neurons that a ReLU neural network needs to approximate multivariate monomials. We establish an exponential lower bound on the complexity of any shallow network that approximates the product function on a general compact domain. Furthermore, we prove that this lower bound does not hold for normalized O(1)-Lipschitz monomials (or, equivalently, when the domain is restricted to the unit cube). These results suggest that shallow ReLU networks suffer from the curse of dimensionality when expressing functions whose Lipschitz parameter scales with the input dimension, and that the expressive power of neural networks lies in their depth rather than in their overall complexity.
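To make the depth-versus-width contrast concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of the standard Yarotsky-style construction: a deep ReLU composition that approximates squaring on [0, 1], the polarization identity that turns squaring into pairwise multiplication, and a binary tree that assembles the product of d inputs on the unit cube. The function names (`sq_approx`, `prod2_approx`, `prod_approx`) and the depth parameter `m` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sq_approx(x, m):
    """Approximate x**2 on [0, 1] with a depth-m composition of ReLU 'hat' units
    (Yarotsky-style construction); the error is at most 2**(-2*m - 2)."""
    g = x
    out = x
    for k in range(1, m + 1):
        # Hat function on [0, 1] expressed with ReLUs: g(t) = 2*relu(t) - 4*relu(t - 0.5)
        g = 2.0 * relu(g) - 4.0 * relu(g - 0.5)
        out = out - g / (4.0 ** k)
    return out

def prod2_approx(x, y, m):
    """Approximate x*y for x, y in [0, 1] via the polarization identity
    x*y = ((x + y)/2)**2 - ((x - y)/2)**2, using the ReLU squaring approximation."""
    u = (x + y) / 2.0                      # lies in [0, 1]
    v = (relu(x - y) + relu(y - x)) / 2.0  # equals |x - y| / 2, lies in [0, 1]
    return sq_approx(u, m) - sq_approx(v, m)

def prod_approx(xs, m):
    """Approximate x_1 * ... * x_d for inputs in [0, 1]^d with a binary tree of
    pairwise approximate multiplications; the depth grows like m * log2(d)."""
    vals = list(xs)
    while len(vals) > 1:
        nxt = [np.clip(prod2_approx(vals[i], vals[i + 1], m), 0.0, 1.0)
               for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            nxt.append(vals[-1])  # carry an unpaired input to the next level
        vals = nxt
    return vals[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=8)
    exact = np.prod(x)
    approx = prod_approx(x, m=6)
    print(f"exact = {exact:.6f}, approx = {approx:.6f}, error = {abs(exact - approx):.2e}")
```

In this sketch the accuracy improves exponentially in the depth parameter `m` while each level uses only a constant number of ReLU units per multiplication, which is the kind of depth efficiency the abstract contrasts with the exponential neuron count required of shallow networks on general compact domains.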
