v2 (latest)

Sobolev Approximation of Deep ReLU Networks in Log-Barron Space

Changhoon Song
Seungchan Ko
Youngjoon Hong
Main: 24 pages
4 figures
Bibliography: 3 pages
1 table
Abstract

Universal approximation theorems show that neural networks can approximate any continuous function; however, the number of parameters may grow exponentially with the ambient dimension, so these results do not fully explain the practical success of deep models on high-dimensional data. Barron space theory addresses this: if a target function belongs to a Barron space, a two-layer network with $n$ parameters achieves an $O(n^{-1/2})$ approximation error in $L^2$. Yet classical Barron spaces $\mathscr{B}^{s+1}$ still require stronger regularity than Sobolev spaces $H^s$, and existing depth-sensitive results often assume constraints such as $sL \le 1/2$. In this paper, we introduce a log-weighted Barron space $\mathscr{B}^{\log}$, which requires a strictly weaker assumption than $\mathscr{B}^s$ for any $s>0$. For this new function space, we first study embedding properties and carry out a statistical analysis via the Rademacher complexity. We then prove that functions in $\mathscr{B}^{\log}$ can be approximated by deep ReLU networks with explicit depth dependence. Finally, we define a family $\mathscr{B}^{s,\log}$, establish approximation bounds in the $H^1$ norm, and identify the maximal depth scales under which these rates are preserved. Our results clarify how depth reduces the regularity required for efficient representation, offering a more precise explanation for the performance of deep architectures beyond the classical Barron setting and for their stable use in today's high-dimensional problems.
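For context, the classical spectral Barron norm is commonly defined in the literature via a Fourier-weighted integral; the abstract does not state the paper's definition of $\mathscr{B}^{\log}$, so the second formula below is only a plausible reading suggested by the name, with a logarithmic weight in place of the polynomial one:

$$
\|f\|_{\mathscr{B}^{s}} \;=\; \int_{\mathbb{R}^d} \bigl(1 + |\xi|\bigr)^{s}\,\bigl|\widehat{f}(\xi)\bigr|\,d\xi,
\qquad
\|f\|_{\mathscr{B}^{\log}} \;\overset{?}{=}\; \int_{\mathbb{R}^d} \bigl(1 + \log(1+|\xi|)\bigr)\,\bigl|\widehat{f}(\xi)\bigr|\,d\xi.
$$

Under this guessed definition, the claim that $\mathscr{B}^{\log}$ is strictly weaker than $\mathscr{B}^{s}$ is consistent: since $\log(1+|\xi|) \le C_s\,(1+|\xi|)^{s}$ for every $s>0$, finiteness of the $\mathscr{B}^{s}$ norm implies finiteness of the log-weighted norm, while the converse fails for functions whose Fourier transform decays only slightly faster than $|\xi|^{-d}\log^{-2}|\xi|$.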
