Sobolev Approximation of Deep ReLU Networks in Log-Barron Space
Universal approximation theorems show that neural networks can approximate any continuous function; however, the number of parameters may grow exponentially with the ambient dimension, so these results do not fully explain the practical success of deep models on high-dimensional data. Barron space theory addresses this: if a target function belongs to a Barron space, a two-layer network with $n$ parameters achieves a dimension-independent approximation error of order $n^{-1/2}$. Yet classical Barron spaces still require stronger regularity than Sobolev spaces, and existing depth-sensitive results often impose additional constraints. In this paper, we introduce a log-weighted Barron space, whose membership condition is strictly weaker than the classical Barron condition. For this new function space, we first study embedding properties and carry out a statistical analysis via the Rademacher complexity. We then prove that functions in this space can be approximated by deep ReLU networks with explicit depth dependence. Next, we define a related family of spaces, establish approximation bounds in the Sobolev norm, and identify the maximal depth scales under which these rates are preserved. Our results clarify how depth reduces the regularity required for efficient representation, offering a more precise explanation for the performance of deep architectures beyond the classical Barron setting and for their stable use in high-dimensional problems.
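For context (this statement is not taken from the abstract), the classical Barron bound that the log-weighted space is intended to relax can be sketched as follows; constants and domain normalizations vary across references, and Barron's original result is for sigmoidal activations, with ReLU variants relying on closely related spectral conditions:
\[
C_f \;=\; \int_{\mathbb{R}^d} \lVert \omega \rVert \,\lvert \hat f(\omega)\rvert \, d\omega \;<\; \infty
\quad\Longrightarrow\quad
\inf_{f_n \in \mathcal{F}_n} \lVert f - f_n \rVert_{L^2(\mu)} \;\le\; \frac{2 r\, C_f}{\sqrt{n}},
\]
where $f$ is defined on a ball $B_r \subset \mathbb{R}^d$, $\mu$ is a probability measure on $B_r$, and $\mathcal{F}_n$ denotes two-layer networks with $n$ hidden units (up to an affine term). The key point carried over to this paper is the dimension-independent $n^{-1/2}$ rate, obtained here under a weaker, log-weighted integrability assumption.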
View on arXiv