ResearchTrend.AI

The power of deeper networks for expressing natural functions

16 May 2017
David Rolnick
Max Tegmark
Abstract

It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly with n for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^{1/k}, suggesting that the minimum number of layers required for practical expressibility grows only logarithmically with n.
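The linear-in-n scaling for deep networks can be made concrete with the standard observation that a smooth nonlinearity multiplies two numbers using a constant number of neurons, and that chaining such gates in a binary tree computes a product of n inputs with O(n) neurons. The sketch below is illustrative only, not code from the paper; the choice of sigma = exp and the eps scale are assumptions for the demo.

```python
import math

def approx_mul(x, y, eps=1e-3, sigma=math.exp):
    """Approximate x*y with four evaluations of a smooth activation.

    For any sigma with sigma''(0) != 0, a Taylor expansion gives
        sigma(a+b) - sigma(a-b) - sigma(-a+b) + sigma(-a-b)
            ~ 4*a*b*sigma''(0)   for small a, b.
    Here sigma = exp, for which sigma''(0) = 1, so four "neurons"
    suffice for one product (up to O(eps^2) error).
    """
    a, b = eps * x, eps * y
    s = sigma(a + b) - sigma(a - b) - sigma(-a + b) + sigma(-a - b)
    return s / (4 * eps ** 2)

def deep_product(xs, eps=1e-3):
    """Multiply all inputs via a binary tree of approximate gates.

    A tree over n inputs has n - 1 internal product gates, each
    costing a constant number of neurons, so the total neuron count
    grows linearly in n -- the deep-network scaling the abstract
    describes. A single hidden layer, by contrast, cannot reuse
    intermediate products, which is where the exponential blow-up
    proved in the paper comes from.
    """
    vals = list(xs)
    while len(vals) > 1:
        vals = [approx_mul(vals[i], vals[i + 1], eps)
                if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]
```

For example, `deep_product([1, 2, 3, 4])` uses three four-neuron gates (twelve neurons total) and returns a value close to 24.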
