Embedding Inequalities for Barron-type Spaces

Journal of Machine Learning (JML), 2023
Abstract

One of the fundamental problems in deep learning theory is understanding the approximation and generalization properties of two-layer neural networks in high dimensions. To tackle this issue, researchers have introduced the Barron space $\mathcal{B}_s(\Omega)$ and the spectral Barron space $\mathcal{F}_s(\Omega)$, where the index $s$ characterizes the smoothness of functions within these spaces and $\Omega\subset\mathbb{R}^d$ is the input domain. However, the relationship between these two types of Barron spaces remains unclear. In this paper, we establish continuous embeddings between these spaces, as implied by the following inequality: for any $\delta\in (0,1)$, $s\in \mathbb{N}^{+}$, and $f: \Omega \mapsto\mathbb{R}$, it holds that
\[
\delta\gamma^{\delta-s}_{\Omega}\|f\|_{\mathcal{F}_{s-\delta}(\Omega)}\lesssim_s \|f\|_{\mathcal{B}_s(\Omega)}\lesssim_s \|f\|_{\mathcal{F}_{s+1}(\Omega)},
\]
where $\gamma_{\Omega}=\sup_{\|v\|_2=1,\,x\in\Omega}|v^Tx|$ and, notably, the hidden constants depend solely on the value of $s$. Furthermore, we provide examples to demonstrate that the lower bound is tight.
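As a side remark on the geometric constant in the inequality: since $\sup_{\|v\|_2=1}|v^Tx|$ is attained at $v = x/\|x\|_2$, the quantity $\gamma_{\Omega}$ equals the largest Euclidean norm attained on $\Omega$. A minimal numerical sketch (my own illustration, not part of the paper) for a finite sample of points:

```python
import numpy as np

def gamma_omega(points: np.ndarray) -> float:
    """Compute gamma_Omega = sup_{||v||_2=1, x in Omega} |v^T x| for a
    finite sample of points from Omega (shape (n, d)).

    Because the supremum over unit vectors v of |v^T x| equals ||x||_2,
    this reduces to the largest Euclidean norm among the sample points.
    """
    return float(np.linalg.norm(points, axis=1).max())

# Example (illustrative): the corners of the unit cube in d = 3.
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
print(gamma_omega(corners))  # sqrt(3) ≈ 1.7320508
```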
