
This article concerns the expressive power of depth in neural nets with ReLU activations. We prove that ReLU nets with width $d+2$ can approximate any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ arbitrarily well, and we obtain quantitative depth estimates for such approximations. Our approach is based on the observation that ReLU nets are particularly well-suited for representing convex functions. Indeed, we give a constructive proof that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. Moreover, when approximating convex, piecewise affine functions by width $d+1$ ReLU nets, we obtain matching upper and lower bounds on the required depth, proving that our construction is essentially optimal.
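To illustrate (not taken from the paper) why narrow ReLU nets handle convex piecewise affine functions, the sketch below evaluates $f(x) = \max_k (w_k \cdot x + b_k)$ layer by layer using the identity $\max(a, b) = b + \mathrm{ReLU}(a - b)$, keeping only $d+1$ hidden units per layer: $d$ units carry $x$ forward (ReLU is the identity on $[0,1]^d$) and one unit tracks the running maximum. The function name and interface here are hypothetical, chosen only for this example.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def convex_pwl_via_narrow_relu_net(x, W, b):
    """Evaluate f(x) = max_k (W[k] @ x + b[k]) with width-(d+1) ReLU layers.

    x : point in [0, 1]^d (nonnegativity lets ReLU copy x exactly)
    W : (K, d) array of slopes; b : (K,) array of intercepts.
    Illustrative sketch only, not the paper's construction verbatim.
    """
    x = np.asarray(x, dtype=float)
    running = W[0] @ x + b[0]  # first affine piece (pre-activation of layer 1)
    for k in range(1, len(b)):
        # Hidden layer of width d+1: [ReLU(x), ReLU(running - next piece)].
        carried_x = relu(x)                     # identity on nonnegative inputs
        gap = relu(running - (W[k] @ x + b[k])) # positive part of the shortfall
        # max(running, piece) = piece + ReLU(running - piece), an affine map
        # of the hidden layer, so it becomes the next pre-activation.
        running = (W[k] @ carried_x + b[k]) + gap
    return running

# Sanity check against direct evaluation of the maximum.
rng = np.random.default_rng(0)
d, K = 3, 5
W = rng.normal(size=(K, d))
b = rng.normal(size=K)
x = rng.uniform(size=d)
assert np.isclose(convex_pwl_via_narrow_relu_net(x, W, b), np.max(W @ x + b))
```

Each pass through the loop adds one layer of depth, so a maximum of $K$ affine pieces uses depth proportional to $K$ at fixed width, which is the trade-off the depth estimates in the abstract quantify.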