
Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces

Journal of Machine Learning Research (JMLR), 2022
25 November 2022
Jonathan W. Siegel
Abstract

Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev space $W^s(L_q(\Omega))$ with error measured in $L_p(\Omega)$. This problem is important when studying the application of neural networks in scientific computing and has previously been completely solved only in the case $p = q = \infty$. Our contribution is to provide a complete solution for all $1 \leq p, q \leq \infty$ and $s > 0$, including asymptotically matching upper and lower bounds. The key technical tool is a novel bit-extraction technique which gives an optimal encoding of sparse vectors. This enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based upon VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters which are not encodable.
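For orientation, a hedged sketch of the shape the sharp rate takes; the notation $W$ for the parameter count and the worst-case formulation below are ours, and the exact statement (including the precise Sobolev embedding condition and the handling of any logarithmic factors) is in the paper. Assuming the compact embedding condition $\frac{1}{q} - \frac{1}{p} < \frac{s}{d}$, the optimal rate is expected to take the form

$$\sup_{\|f\|_{W^s(L_q(\Omega))} \leq 1} \;\inf_{f_W} \|f - f_W\|_{L_p(\Omega)} \asymp W^{-2s/d},$$

where the infimum runs over deep ReLU networks $f_W$ with at most $W$ parameters. This doubles the exponent $-s/d$ attainable with $W$ degrees of freedom by classical encodable methods such as piecewise polynomials on quasi-uniform meshes, which is the sense in which very deep networks outperform classical approximation.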
