arXiv:1611.01491

Understanding Deep Neural Networks with Rectified Linear Units

4 November 2016
R. Arora, A. Basu, Poorya Mianjy, Anirbit Mukherjee
Abstract

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to *global optimality* with runtime polynomial in the data size, albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super-exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of "hard" functions, contrary to the countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers will require at least $\frac{1}{2}k^{k+1}-1$ total nodes. Finally, for the family of $\mathbb{R}^n \to \mathbb{R}$ DNNs with ReLU activations, we show a new lower bound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture; most distinctively, our lower bound is demonstrated by an explicit construction of a *smoothly parameterized* family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory.
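The quoted gap theorem is easy to appreciate numerically: the deep network's total size grows only cubically in $k$, while the node count forced on any network with at most $k$ hidden layers grows like $k^{k+1}$. The sketch below (plain Python, not taken from the paper) tabulates both quantities for small $k$ and includes a minimal evaluator for a one-hidden-layer $\mathbb{R}^n \to \mathbb{R}$ ReLU network, the class targeted by the paper's globally optimal training algorithm; all function and parameter names here are illustrative.

```python
import numpy as np

def shallow_relu_net(x, W, b, a, c=0.0):
    """Evaluate a one-hidden-layer ReLU network f: R^n -> R,
    f(x) = sum_i a_i * max(0, <w_i, x> + b_i) + c.
    (Illustrative sketch; not the paper's training algorithm.)"""
    return float(a @ np.maximum(0.0, W @ x + b) + c)

# Depth-size gap from the abstract: there is a ReLU DNN with k^2 hidden
# layers and total size k^3 such that any ReLU DNN with at most k hidden
# layers needs at least (1/2) * k^(k+1) - 1 nodes to represent it.
for k in range(2, 7):
    deep_size = k ** 3
    shallow_lower_bound = 0.5 * k ** (k + 1) - 1
    print(f"k={k}: deep size k^3 = {deep_size:>3}, "
          f"shallow lower bound >= {shallow_lower_bound:,.1f}")

# Example: evaluate a random 2-hidden-unit network on R^3.
rng = np.random.default_rng(0)
W, b, a = rng.normal(size=(2, 3)), rng.normal(size=2), rng.normal(size=2)
print(shallow_relu_net(rng.normal(size=3), W, b, a))
```

Already at $k=6$ the deep network has size 216 while the shallow lower bound exceeds 139,000 nodes, which is the super-exponential separation the abstract refers to.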
