Complexity of One-Dimensional ReLU DNNs
Jonathan Kogan
Hayden Jananthan
Jeremy Kepner
Main: 3 pages
Bibliography: 2 pages
Abstract
We study the expressivity of one-dimensional (1D) ReLU deep neural networks through the lens of their linear regions. For randomly initialized, fully connected 1D ReLU networks (He scaling with nonzero bias) in the infinite-width limit, we prove that the expected number of linear regions grows with the hidden-layer widths $n_\ell$, where $n_\ell$ denotes the number of neurons in the $\ell$-th hidden layer. We also propose a function-adaptive notion of sparsity that compares the expected number of regions the network uses to the minimal number needed to approximate a target function within a fixed tolerance.
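Since the paper's exact growth rate is not reproduced here, the following is a minimal empirical sketch (not the authors' code) of the quantity being studied: it draws a fully connected 1D ReLU network with He-scaled weights and nonzero Gaussian biases, then counts its linear regions by tracking changes in the ReLU activation pattern along a dense input grid. The layer widths, bias scale, input interval, and grid resolution are illustrative assumptions, not the paper's settings.

import numpy as np

def init_layers(widths, rng, bias_std=1.0):
    # He-scaled weights (std = sqrt(2 / fan_in)) and nonzero Gaussian biases.
    layers = []
    fan_in = 1  # one-dimensional input
    for n in widths:
        W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(n, fan_in))
        b = rng.normal(0.0, bias_std, size=n)
        layers.append((W, b))
        fan_in = n
    return layers

def count_linear_regions(layers, lo=-3.0, hi=3.0, m=100_000):
    # Estimate the number of linear regions on [lo, hi] by counting grid
    # intervals on which the ReLU activation pattern changes.
    x = np.linspace(lo, hi, m)
    h = x.reshape(1, -1)                      # shape (1, m): 1D inputs
    patterns = []
    for W, b in layers:
        z = W @ h + b[:, None]                # pre-activations, shape (n, m)
        patterns.append(z > 0)                # which neurons fire at each input
        h = np.maximum(z, 0)                  # ReLU
    patterns = np.vstack(patterns)            # (total hidden neurons, m)
    changes = np.any(patterns[:, 1:] != patterns[:, :-1], axis=0)
    return 1 + int(changes.sum())             # regions = pattern changes + 1

rng = np.random.default_rng(0)
widths = [64, 64, 64]  # hidden-layer widths n_l (illustrative)
counts = [count_linear_regions(init_layers(widths, rng)) for _ in range(20)]
print(f"mean number of linear regions over 20 draws: {np.mean(counts):.1f}")

Averaging the count over many random draws approximates the expected number of regions; the estimate undercounts slightly whenever more than one breakpoint falls between adjacent grid points, so the grid should be much finer than the expected region count.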
