
A Depth Hierarchy for Computing the Maximum in ReLU Networks via Extremal Graph Theory

Itay Safran
Main: 8 pages · 4 figures · Bibliography: 3 pages · Appendix: 10 pages
Abstract

We consider the problem of exact computation of the maximum function over $d$ real inputs using ReLU neural networks. We prove a depth hierarchy, wherein width $\Omega\big(d^{1+\frac{1}{2^{k-2}-1}}\big)$ is necessary to represent the maximum for any depth $3\le k\le \log_2(\log_2(d))$. This is the first unconditional super-linear lower bound for this fundamental operator at depths $k\ge 3$, and it holds even if the depth scales with $d$. Our proof is based on a combinatorial argument: it associates the non-differentiable ridges of the maximum with cliques in a graph induced by the first hidden layer of the computing network, and uses Turán's theorem from extremal graph theory to show that a sufficiently narrow network cannot capture the non-linearities of the maximum. This suggests that, despite its simple nature, the maximum function possesses an inherent complexity stemming from the geometric structure of its non-differentiable hyperplanes, and it provides a novel approach for proving lower bounds for deep neural networks.
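As a concrete reading of the bound above (an instantiation of the abstract's formula, not an additional claim from the paper): at depth $k=3$ the exponent is $1+\frac{1}{2^{3-2}-1}=2$, so width $\Omega(d^2)$ is required, while as $k$ approaches $\log_2(\log_2(d))$ the exponent decays toward $1$. For context, the Turán-type inequality underlying such counting arguments is the standard statement of Turán's theorem; how the paper applies it precisely is not specified in the abstract:

\[
|E(G)| \;\le\; \Bigl(1 - \tfrac{1}{r}\Bigr)\frac{n^{2}}{2}
\qquad \text{for every } K_{r+1}\text{-free graph } G \text{ on } n \text{ vertices.}
\]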
