On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network

This paper explores the expressive power of deep neural networks through the framework of function compositions. We demonstrate that the repeated compositions of a single fixed-size ReLU network exhibit surprising expressive power, despite the limited expressive capabilities of the individual network itself. Specifically, we prove by construction that $\mathcal{L}_2 \circ \boldsymbol{g}^{\circ r} \circ \boldsymbol{\mathcal{L}}_1$ can approximate $1$-Lipschitz continuous functions on $[0,1]^d$ with an error $\mathcal{O}(r^{-1/d})$, where $\boldsymbol{g}$ is realized by a fixed-size ReLU network, $\boldsymbol{\mathcal{L}}_1$ and $\mathcal{L}_2$ are two affine linear maps matching the dimensions, and $\boldsymbol{g}^{\circ r}$ denotes the $r$-times composition of $\boldsymbol{g}$. Furthermore, we extend such a result to generic continuous functions on $[0,1]^d$ with the approximation error characterized by the modulus of continuity. Our results reveal that a continuous-depth network generated via a dynamical system has immense approximation power even if its dynamics function is time-independent and realized by a fixed-size ReLU network.
View on arXiv
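
To make the composition architecture concrete, the following is a minimal PyTorch sketch of a network of the form $\mathcal{L}_2 \circ \boldsymbol{g}^{\circ r} \circ \boldsymbol{\mathcal{L}}_1$: a single fixed-size ReLU block $\boldsymbol{g}$ with shared weights applied $r$ times between two affine maps. The class name, hidden width, and layer counts are illustrative assumptions; the paper's result is a constructive existence proof about such architectures, not a training recipe.

```python
import torch
import torch.nn as nn

class RepeatedComposition(nn.Module):
    """Illustrative sketch (not the paper's explicit construction):
    realizes L2 ∘ g^{∘ r} ∘ L1, where g is a single fixed-size ReLU
    network whose weights are shared across all r compositions."""

    def __init__(self, d_in, d_hidden, d_out, r):
        super().__init__()
        self.L1 = nn.Linear(d_in, d_hidden)      # affine map L1 matching the input dimension
        self.g = nn.Sequential(                  # fixed-size ReLU block g (weights shared over r applications)
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )
        self.L2 = nn.Linear(d_hidden, d_out)     # affine map L2 matching the output dimension
        self.r = r

    def forward(self, x):
        x = self.L1(x)
        for _ in range(self.r):                  # r-times composition g^{∘ r}
            x = self.g(x)
        return self.L2(x)

# Usage: expressive power grows with r (depth) while the size of g stays fixed.
model = RepeatedComposition(d_in=3, d_hidden=16, d_out=1, r=32)
y = model(torch.rand(8, 3))                      # inputs in [0,1]^d
print(y.shape)                                   # torch.Size([8, 1])
```

Note that the loop over `self.g` mirrors the time-independent dynamics function of the continuous-depth view mentioned in the abstract: the same fixed map is iterated, and only the number of iterations $r$ changes.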