Optimal Approximation Rate of ReLU Networks in terms of Width and Depth

Journal de Mathématiques Pures et Appliquées (JMPA), 2021
28 February 2021
Zuowei Shen, Haizhao Yang, Shijun Zhang
arXiv: 2103.00502
Abstract

This paper concentrates on the approximation power of deep feed-forward neural networks in terms of width and depth. It is proved by construction that ReLU networks with width $\mathcal{O}\big(\max\{d\lfloor N^{1/d}\rfloor,\, N+2\}\big)$ and depth $\mathcal{O}(L)$ can approximate a Hölder continuous function on $[0,1]^d$ with an approximation rate $\mathcal{O}\big(\lambda\sqrt{d}\,(N^2L^2\ln N)^{-\alpha/d}\big)$, where $\alpha\in(0,1]$ and $\lambda>0$ are the Hölder order and constant, respectively. Such a rate is optimal up to a constant in terms of width and depth separately, while existing results are only nearly optimal without the logarithmic factor in the approximation rate. More generally, for an arbitrary continuous function $f$ on $[0,1]^d$, the approximation rate becomes $\mathcal{O}\big(\sqrt{d}\,\omega_f\big((N^2L^2\ln N)^{-1/d}\big)\big)$, where $\omega_f(\cdot)$ is the modulus of continuity. We also extend our analysis to any continuous function $f$ on a bounded set. In particular, if ReLU networks with depth $31$ and width $\mathcal{O}(N)$ are used to approximate one-dimensional Lipschitz continuous functions on $[0,1]$ with a Lipschitz constant $\lambda>0$, then the approximation rate in terms of the total number of parameters, $W=\mathcal{O}(N^2)$, becomes $\mathcal{O}\big(\tfrac{\lambda}{W\ln W}\big)$, which has not been discovered in the literature for fixed-depth ReLU networks.
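
The fixed-depth claim in the last sentence can be read off the general Hölder rate by specializing it; the following is a sketch only, assuming the constants hidden in the $\mathcal{O}(\cdot)$ notation are absorbed and using the parameter count $W=\mathcal{O}(N^2)$ stated in the abstract for a width-$\mathcal{O}(N)$, fixed-depth network:

```latex
% Specialize the general rate to Lipschitz functions on [0,1]:
% d = 1, alpha = 1, depth fixed (L = O(1), here L = 31), width O(N), parameters W = O(N^2).
\mathcal{O}\!\Big(\lambda\sqrt{d}\,\big(N^2 L^2 \ln N\big)^{-\alpha/d}\Big)
\;\longrightarrow\;
\mathcal{O}\!\Big(\frac{\lambda}{N^2 \ln N}\Big)
\;=\;
\mathcal{O}\!\Big(\frac{\lambda}{W \ln W}\Big),
\qquad
\text{since } W = \mathcal{O}(N^2) \text{ and } \ln W \asymp \ln N^2 = 2\ln N .
```

In this specialization the fixed depth only affects the constant factor, since $L^2$ is constant; the rate in $W$ then follows directly from the general width-depth bound above.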
