On the Depth of Monotone ReLU Neural Networks and ICNNs

Abstract
We study two models of ReLU neural networks: monotone networks (ReLU^+) and input convex neural networks (ICNN). Our focus is on expressivity, mostly in terms of depth, and we prove the following lower bounds. For the maximum function MAX_n computing the maximum of n real numbers, we show that ReLU^+ networks cannot compute MAX_n, or even approximate it. We prove a sharp lower bound on the ICNN depth complexity of MAX_n. We also prove depth separations between ReLU networks and ICNNs; for every k, there is a depth-2 ReLU network of size O(k) that cannot be simulated by a depth-k ICNN. The proofs are based on deep connections between neural networks and polyhedral geometry, and also use isoperimetric properties of triangulations.
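As a self-contained illustration (not taken from the paper), the sketch below uses the standard identity max(x, y) = x + ReLU(y - x) to compute the maximum of n numbers with a ReLU circuit of depth roughly log2 n. The negative coefficient inside the ReLU is exactly the kind of weight that a monotone network (assumed here to mean one with only nonnegative weights) cannot use, which hints at why expressing MAX_n is harder in the monotone and convex models.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max_two(x, y):
    # max(x, y) = x + ReLU(y - x); note the negative weight on x inside the
    # ReLU, which a monotone (nonnegative-weight) network is not allowed to use.
    return x + relu(y - x)

def max_n(values):
    # Pairwise tournament: the maximum of n numbers via a ReLU circuit of
    # depth about ceil(log2 n).
    vals = list(values)
    while len(vals) > 1:
        nxt = [max_two(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

print(max_n([3.0, -1.5, 7.2, 0.4]))  # 7.2
```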
@article{bakaev2025_2505.06169,
  title={On the Depth of Monotone ReLU Neural Networks and ICNNs},
  author={Egor Bakaev and Florestan Brunck and Christoph Hertrich and Daniel Reichman and Amir Yehudayoff},
  journal={arXiv preprint arXiv:2505.06169},
  year={2025}
}