On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network

Abstract

This paper explores the expressive power of deep neural networks through the framework of function compositions. We demonstrate that the repeated compositions of a single fixed-size ReLU network exhibit surprising expressive power, despite the limited expressive capabilities of the individual network itself. Specifically, we prove by construction that $\mathcal{L}_2\circ \boldsymbol{g}^{\circ r}\circ \boldsymbol{\mathcal{L}}_1$ can approximate $1$-Lipschitz continuous functions on $[0,1]^d$ with an error $\mathcal{O}(r^{-1/d})$, where $\boldsymbol{g}$ is realized by a fixed-size ReLU network, $\boldsymbol{\mathcal{L}}_1$ and $\mathcal{L}_2$ are two affine linear maps matching the dimensions, and $\boldsymbol{g}^{\circ r}$ denotes the $r$-times composition of $\boldsymbol{g}$. Furthermore, we extend this result to generic continuous functions on $[0,1]^d$, with the approximation error characterized by the modulus of continuity. Our results reveal that a continuous-depth network generated via a dynamical system has immense approximation power even if its dynamics function is time-independent and realized by a fixed-size ReLU network.
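
To make the architecture concrete, the following is a minimal sketch of the composition $\mathcal{L}_2\circ \boldsymbol{g}^{\circ r}\circ \boldsymbol{\mathcal{L}}_1$ described in the abstract: a single fixed-size ReLU block $\boldsymbol{g}$ applied $r$ times between two dimension-matching affine maps. The dimensions, widths, and random weights below are illustrative placeholders, not the constructive weights from the paper's proof.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, r = 2, 16, 8  # input dimension, width of g, number of compositions (illustrative)

# Fixed-size ReLU network g: R^m -> R^m (one hidden layer; weights are placeholders).
W1, b1 = rng.normal(size=(m, m)) / np.sqrt(m), np.zeros(m)
W2, b2 = rng.normal(size=(m, m)) / np.sqrt(m), np.zeros(m)

def g(z):
    # ReLU block applied repeatedly; its size does not grow with r.
    return W2 @ np.maximum(W1 @ z + b1, 0.0) + b2

# Affine maps matching the dimensions: L1: R^d -> R^m and L2: R^m -> R.
A1, c1 = rng.normal(size=(m, d)), np.zeros(m)
A2, c2 = rng.normal(size=(1, m)), np.zeros(1)

def model(x):
    z = A1 @ x + c1          # L1
    for _ in range(r):       # g composed r times
        z = g(z)
    return A2 @ z + c2       # L2

x = rng.uniform(size=d)      # a point in [0,1]^d
print(model(x))
```

Note that only $r$ (the number of times the same block is reused) grows when more accuracy is required; the parameter count of $\boldsymbol{g}$ itself stays fixed, which is the sense in which depth via repeated composition supplies the expressive power.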
