
On the Depth of Monotone ReLU Neural Networks and ICNNs

Abstract

We study two models of ReLU neural networks: monotone networks (ReLU^+) and input convex neural networks (ICNN). Our focus is on expressivity, mostly in terms of depth, and we prove the following lower bounds. For the maximum function MAX_n computing the maximum of n real numbers, we show that ReLU^+ networks cannot compute MAX_n, or even approximate it. We prove a sharp n lower bound on the ICNN depth complexity of MAX_n. We also prove depth separations between ReLU networks and ICNNs: for every k, there is a depth-2 ReLU network of size O(k^2) that cannot be simulated by a depth-k ICNN. The proofs are based on deep connections between neural networks and polyhedral geometry, and also use isoperimetric properties of triangulations.
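As a quick illustration of why monotonicity is restrictive (a standard identity, not one of the paper's results, and assuming ReLU^+ denotes ReLU networks with nonnegative weights): already MAX_2 is computable by a small depth-2 ReLU network via

    max(x, y) = y + ReLU(x - y),

but the subtraction x - y inside the ReLU requires a negative weight, which is exactly what the monotone model rules out.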

@article{bakaev2025_2505.06169,
  title={On the Depth of Monotone ReLU Neural Networks and ICNNs},
  author={Egor Bakaev and Florestan Brunck and Christoph Hertrich and Daniel Reichman and Amir Yehudayoff},
  journal={arXiv preprint arXiv:2505.06169},
  year={2025}
}