
Statistical Learning Guarantees for Group-Invariant Barron Functions

Main: 24 pages, 3 figures; bibliography: 5 pages
Abstract

We investigate the generalization error of group-invariant neural networks within the Barron framework. Our analysis shows that incorporating group-invariant structures introduces a group-dependent factor $\delta_{G,\Gamma,\sigma} \le 1$ into the approximation rate. When this factor is small, group invariance yields substantial improvements in approximation accuracy. On the estimation side, we establish that the Rademacher complexity of the group-invariant class is no larger than that of the non-invariant counterpart, implying that the estimation error does not deteriorate when symmetry is incorporated. Consequently, the generalization error can improve significantly when learning functions with inherent group symmetries. We further provide illustrative examples demonstrating both favorable cases, where $\delta_{G,\Gamma,\sigma} \approx |G|^{-1}$, and unfavorable ones, where $\delta_{G,\Gamma,\sigma} \approx 1$. Overall, our results offer a rigorous theoretical foundation showing that encoding group-invariant structures in neural networks leads to clear statistical advantages for symmetric target functions.
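To make the two claims concrete, here is a schematic of the bounds the abstract describes, written in the classical Barron form; the specific norms, constants, and the $n^{-1/2}$ rate are illustrative assumptions, not statements taken from the paper. With $\mathcal{F}_n^G$ denoting a class of $n$-neuron group-invariant networks, $\|f\|_{\mathcal{B}}$ a Barron-type norm, and $\mathrm{Rad}_m$ the empirical Rademacher complexity over $m$ samples:

\[
\inf_{f_n \in \mathcal{F}_n^G} \|f - f_n\|_{L^2} \;\lesssim\; \delta_{G,\Gamma,\sigma}\,\frac{\|f\|_{\mathcal{B}}}{\sqrt{n}},
\qquad
\mathrm{Rad}_m\!\bigl(\mathcal{F}_n^G\bigr) \;\le\; \mathrm{Rad}_m\!\bigl(\mathcal{F}_n\bigr).
\]

Read this way, a small $\delta_{G,\Gamma,\sigma}$ (e.g. $\approx |G|^{-1}$ in the favorable examples) directly shrinks the approximation term, while the second inequality says the estimation term is no worse than for the unconstrained class $\mathcal{F}_n$.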
