Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

9 August 2017
Boris Hanin
arXiv:1708.02691
Abstract

This article concerns the expressive power of depth in neural nets with ReLU activations. We prove that ReLU nets with width 2d+2 can approximate any continuous scalar function on the d-dimensional cube [0,1]^d arbitrarily well. We obtain quantitative depth estimates for such approximations. Our approach is based on the observation that ReLU nets are particularly well-suited for representing convex functions. Indeed, we give a constructive proof that ReLU nets with width d+1 can approximate any continuous convex function of d variables arbitrarily well. Moreover, when approximating convex, piecewise affine functions by width-(d+1) ReLU nets, we obtain matching upper and lower bounds on the required depth, proving that our construction is essentially optimal.
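The following is a minimal sketch, not the paper's construction, of the basic idea that makes ReLU nets well suited to convex functions: a convex piecewise-affine function is a maximum of affine pieces, and a maximum can be folded in one piece at a time using the identity max(a, b) = a + relu(b - a), so the whole evaluation needs to carry only the d inputs plus one running value. The function and variable names here (max_affine_relu, signs) are illustrative, and the width accounting is informal rather than the paper's width-(d+1) bound.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def max_affine_relu(x, weights, biases):
    """Evaluate f(x) = max_k (w_k . x + b_k) by folding in one affine
    piece at a time via max(a, b) = a + relu(b - a).

    Each step keeps only the d inputs and the current running maximum,
    loosely mirroring why convex piecewise-affine functions fit into
    narrow-but-deep ReLU nets.
    """
    running = weights[0] @ x + biases[0]           # first affine piece
    for w, b in zip(weights[1:], biases[1:]):
        piece = w @ x + b                          # next affine piece
        running = running + relu(piece - running)  # = max(running, piece)
    return running

# Example: the convex function f(x) = ||x||_1 on [0,1]^2 is exactly the
# maximum of its 4 supporting hyperplanes, one per sign pattern.
signs = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
biases = np.zeros(len(signs))

x = np.array([0.3, 0.7])
print(max_affine_relu(x, signs, biases), np.sum(np.abs(x)))  # both 1.0
```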
