ResearchTrend.AI
Rates of Approximation by ReLU Shallow Neural Networks

24 July 2023
Tong Mao
Ding-Xuan Zhou
Abstract

Neural networks activated by the rectified linear unit (ReLU) play a central role in the recent development of deep learning. The topic of approximating functions from Hölder spaces by these networks is crucial for understanding the efficiency of the induced learning algorithms. Although the topic has been well investigated in the setting of deep neural networks with many layers of hidden neurons, it is still open for shallow networks having only one hidden layer. In this paper, we provide rates of uniform approximation by these networks. We show that ReLU shallow neural networks with $m$ hidden neurons can uniformly approximate functions from the Hölder space $W_\infty^r([-1,1]^d)$ with rates $O\big((\log m)^{\frac{1}{2}+d}\, m^{-\frac{r}{d}\cdot\frac{d+2}{d+4}}\big)$ when $r < d/2 + 2$. Such rates are very close to the optimal one $O(m^{-\frac{r}{d}})$ in the sense that $\frac{d+2}{d+4}$ is close to $1$ when the dimension $d$ is large.
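The closeness of the two exponents can be checked numerically. The sketch below (illustrative only, not from the paper) compares the shallow-network rate exponent $\frac{r}{d}\cdot\frac{d+2}{d+4}$ with the optimal exponent $\frac{r}{d}$, showing the gap factor $\frac{d+2}{d+4}$ approaching $1$ as the dimension $d$ grows; the function name and the chosen values of $r$ and $d$ are hypothetical.

```python
# Illustrative sketch: compare the exponent of the shallow ReLU network
# approximation rate, (r/d) * (d+2)/(d+4), against the optimal exponent r/d.
# The ratio (d+2)/(d+4) tends to 1 as the dimension d grows.

def rate_exponents(r, d):
    """Return (optimal exponent, shallow-network exponent) for smoothness r
    and dimension d. The paper's result requires r < d/2 + 2."""
    optimal = r / d
    shallow = (r / d) * (d + 2) / (d + 4)
    return optimal, shallow

for d in [2, 10, 100, 1000]:
    r = 1.0  # smoothness index; satisfies r < d/2 + 2 for all d here
    opt, sh = rate_exponents(r, d)
    print(f"d={d:5d}  optimal m^-{opt:.4f}  shallow m^-{sh:.4f}  "
          f"gap factor={(d + 2) / (d + 4):.4f}")
```

For $d = 1000$ the gap factor is $1002/1004 \approx 0.998$, so the shallow-network rate is nearly optimal in high dimensions, as the abstract states.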
