ResearchTrend.AI

arXiv:1806.10909
ResNet with one-neuron hidden layers is a Universal Approximator

28 June 2018
Hongzhou Lin
Stefanie Jegelka
Abstract

We demonstrate that a very deep ResNet of stacked modules with one neuron per hidden layer and ReLU activation functions can uniformly approximate any Lebesgue integrable function in d dimensions, i.e. ℓ1(ℝ^d). Because of the identity mapping inherent to ResNets, our network has alternating layers of dimension one and d. This stands in sharp contrast to fully connected networks, which are not universal approximators if their width is the input dimension d [Lu et al., 2017; Hanin and Sellke, 2017]. Hence, our result implies an increase in representational power for narrow deep networks by the ResNet architecture.
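The architecture the abstract describes — residual modules whose hidden layer has a single neuron, so the widths alternate between 1 and d — can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' code; the block form x + v·ReLU(u·x + b), with u, v ∈ ℝ^d and scalar bias b, is an assumption consistent with the abstract's description (identity skip connection, one ReLU neuron per module).

```python
import numpy as np

def relu(z):
    """Elementwise ReLU activation."""
    return np.maximum(z, 0.0)

def resnet_block(x, u, b, v):
    """One residual module with a single hidden neuron.

    x : (d,) input vector, carried through by the identity mapping
    u : (d,) weights projecting x down to the width-1 hidden layer
    b : scalar bias of the hidden neuron
    v : (d,) weights projecting the hidden activation back up to d

    Returns x + v * ReLU(u . x + b), so the layer widths alternate
    between d (the residual stream) and 1 (the hidden neuron).
    """
    return x + v * relu(u @ x + b)

def resnet(x, params):
    """Stack many one-neuron residual modules into a deep ResNet."""
    for u, b, v in params:
        x = resnet_block(x, u, b, v)
    return x

# Hypothetical usage: a random 5-block network on a 3-dimensional input.
rng = np.random.default_rng(0)
d = 3
params = [
    (rng.standard_normal(d), rng.standard_normal(), rng.standard_normal(d))
    for _ in range(5)
]
y = resnet(np.ones(d), params)
print(y.shape)  # the residual stream stays d-dimensional: (3,)
```

Note how the identity mapping is what lets the network stay d-dimensional despite the width-1 bottleneck in each module — a plain fully connected network of width d could not do this, per the contrast drawn in the abstract.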
