arXiv:2206.04360
A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks

9 June 2022
E. M. Achour
Armand Foucault
Sébastien Gerchinovitz
François Malgouyres
Abstract

We study the fundamental limits to the expressive power of neural networks. Given two sets $F$, $G$ of real-valued functions, we first prove a general lower bound on how well functions in $F$ can be approximated in $L^p(\mu)$ norm by functions in $G$, for any $p \geq 1$ and any probability measure $\mu$. The lower bound depends on the packing number of $F$, the range of $F$, and the fat-shattering dimension of $G$. We then instantiate this bound in the case where $G$ corresponds to a piecewise-polynomial feed-forward neural network, and describe in detail the application to two sets $F$: Hölder balls and multivariate monotonic functions. Besides matching (known or new) upper bounds up to log factors, our lower bounds shed some light on the similarities and differences between approximation in $L^p$ norm and in sup norm, solving an open question of DeVore et al. (2021). Our proof strategy differs from the sup norm case and uses a key probability result of Mendelson (2002).
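To illustrate the shape of a packing-versus-fat-shattering argument of this kind, the schematic inequality below is a rough sketch only: the notation, constants, and exact hypotheses are assumptions for illustration and may differ from the theorem actually proved in the paper.

% Illustrative sketch, not the paper's statement. Assumed notation:
% $M_p(F,\varepsilon)$ = $\varepsilon$-packing number of $F$ in $L^p(\mu)$,
% $\mathrm{fat}_{c\varepsilon}(G)$ = fat-shattering dimension of $G$ at scale $c\varepsilon$,
% $R$ = a bound on the range of functions in $F$; $c, c' > 0$ are unspecified constants.
\[
  \log M_p(F,\varepsilon)
  \;>\;
  \mathrm{fat}_{c\varepsilon}(G)\,\log\!\Big(\frac{R}{\varepsilon}\Big)
  \quad\Longrightarrow\quad
  \sup_{f \in F}\ \inf_{g \in G}\ \|f - g\|_{L^p(\mu)} \;\ge\; c'\,\varepsilon .
\]

Informally, if $F$ contains more $\varepsilon$-separated functions than a class with the capacity of $G$ can account for, then at least one function in $F$ must lie at distance of order $\varepsilon$ from every element of $G$.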
