Complexity of One-Dimensional ReLU DNNs

Jonathan Kogan
Hayden Jananthan
Jeremy Kepner
Abstract

We study the expressivity of one-dimensional (1D) ReLU deep neural networks through the lens of their linear regions. For randomly initialized, fully connected 1D ReLU networks (He scaling with nonzero bias) in the infinite-width limit, we prove that the expected number of linear regions grows as $\sum_{\ell = 1}^{L} n_\ell + o\!\left(\sum_{\ell = 1}^{L} n_\ell\right) + 1$, where $n_\ell$ denotes the number of neurons in the $\ell$-th hidden layer. We also propose a function-adaptive notion of sparsity that compares the expected number of regions used by the network to the minimal number needed to approximate a target function within a fixed tolerance.
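As an informal illustration of the counting result (not the paper's proof technique), the following sketch empirically estimates the number of linear regions of a random 1D ReLU network under He-scaled weights. The bias scale, the evaluation interval, the grid resolution, and the slope-change tolerance are all assumptions made for the example; regions are detected by looking for slope changes of the piecewise-linear output on a fine grid.

```python
import numpy as np

def he_relu_net(widths, rng, bias_std=0.5):
    """Random fully connected 1D ReLU net: He-scaled weights,
    nonzero Gaussian biases (bias_std is an illustrative choice)."""
    params, fan_in = [], 1
    for n in widths:
        W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(n, fan_in))
        b = rng.normal(0.0, bias_std, size=n)
        params.append((W, b))
        fan_in = n
    # scalar linear readout layer
    w_out = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(1, fan_in))
    b_out = rng.normal(0.0, bias_std, size=1)
    params.append((w_out, b_out))
    return params

def forward(params, x):
    """Evaluate the network on a batch of scalar inputs x of shape (m,)."""
    h = x[None, :]
    for W, b in params[:-1]:
        h = np.maximum(W @ h + b[:, None], 0.0)  # ReLU hidden layers
    W, b = params[-1]
    return (W @ h + b[:, None]).ravel()          # linear output

def count_regions(params, lo=-10.0, hi=10.0, m=200_000, tol=1e-8):
    """Estimate the number of linear regions on [lo, hi] by counting
    slope changes of the piecewise-linear output on a dense grid."""
    x = np.linspace(lo, hi, m)
    y = forward(params, x)
    slopes = np.diff(y) / np.diff(x)
    return int(np.sum(np.abs(np.diff(slopes)) > tol)) + 1

rng = np.random.default_rng(0)
widths = [64, 64, 64]  # n_1, ..., n_L
counts = [count_regions(he_relu_net(widths, rng)) for _ in range(20)]
print(f"mean regions ~ {np.mean(counts):.1f}, "
      f"sum of widths + 1 = {sum(widths) + 1}")
```

Grid-based counting can miss closely spaced breakpoints or regions outside the sampled interval, so the estimate is only a rough check of the $\sum_\ell n_\ell + 1$ scaling at moderate widths, not a substitute for the infinite-width statement.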
