Embedding Inequalities for Barron-type Spaces

One of the fundamental problems in deep learning theory is understanding the approximation and generalization properties of two-layer neural networks in high dimensions. To address this problem, researchers have introduced the Barron space $\mathcal{B}_s(\Omega)$ and the spectral Barron space $\mathcal{F}_s(\Omega)$, where the index $s$ characterizes the smoothness of functions within these spaces and $\Omega\subset\mathbb{R}^d$ represents the input domain. However, the relationship between these two types of Barron spaces remains unclear. In this paper, we establish continuous embeddings between these spaces, as implied by the following inequality: for any $s$ and any $\delta\in(0,1)$, it holds that \[ \delta\gamma^{\delta-s}_{\Omega}\|f\|_{\mathcal{F}_{s-\delta}(\Omega)}\lesssim_s \|f\|_{\mathcal{B}_s(\Omega)}\lesssim_s \|f\|_{\mathcal{F}_{s+1}(\Omega)}, \] where $\gamma_{\Omega}$ is a constant depending only on the domain $\Omega$ and, notably, the hidden constants depend solely on the value of $s$. Furthermore, we provide examples to demonstrate that the lower bound is tight.
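For context, the two norms are commonly defined along the following lines; this is a sketch of one standard convention from the literature, and the paper's precise normalization (choice of norm on $\omega$ and $w$, activation $\sigma$, and placement of the index $s$) may differ. The spectral Barron norm weights the Fourier transform of an extension $f_e$ of $f$ to $\mathbb{R}^d$, \[ \|f\|_{\mathcal{F}_s(\Omega)} = \inf_{f_e|_{\Omega}=f}\int_{\mathbb{R}^d}\big(1+\|\omega\|\big)^{s}\,|\hat{f}_e(\omega)|\,\mathrm{d}\omega, \] while the Barron norm is taken over integral (infinitely wide two-layer network) representations of $f$, \[ \|f\|_{\mathcal{B}_s(\Omega)} = \inf_{\rho}\ \mathbb{E}_{(a,w,b)\sim\rho}\Big[|a|\,\big(\|w\|+|b|\big)^{s}\Big], \quad \text{where } f(x)=\mathbb{E}_{(a,w,b)\sim\rho}\big[a\,\sigma(w^{\top}x+b)\big] \text{ for } x\in\Omega. \]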