How isotropic kernels perform on simple invariants

Abstract

We investigate how the training curve of isotropic kernel methods depends on the symmetry of the task to be learned, in several settings. (i) We consider a regression task, where the target function is a Gaussian random field that depends only on $d_\parallel$ variables, fewer than the input dimension $d$. We compute the expected test error $\epsilon$, which follows $\epsilon \sim p^{-\beta}$ where $p$ is the size of the training set. We find that $\beta \sim 1/d$ independently of $d_\parallel$, supporting previous findings that the presence of invariants does not resolve the curse of dimensionality for kernel regression. (ii) Next we consider support-vector binary classification and introduce the stripe model, where the data label depends on a single coordinate, $y(\underline{x}) = y(x_1)$, corresponding to parallel decision boundaries separating labels of different signs, and assume that there is no margin at these interfaces. We argue and confirm numerically that for large bandwidth, $\beta = \frac{d-1+\xi}{3d-3+\xi}$, where $\xi \in (0,2)$ is the exponent characterizing the singularity of the kernel at the origin. This estimate improves on the classical bounds obtainable from Rademacher complexity. In this setting there is no curse of dimensionality, since $\beta \rightarrow 1/3$ as $d \rightarrow \infty$. (iii) We confirm these findings for the spherical model, for which $y(\underline{x}) = y(|\underline{x}|)$. (iv) In the stripe model, we show that if the data are compressed along their invariants by some factor $\lambda$ (an operation believed to take place in deep networks), the test error is reduced by a factor $\lambda^{-\frac{2(d-1)}{3d-3+\xi}}$.
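
As an illustration of the classification setting (ii), the sketch below trains a support-vector machine with an isotropic Laplace kernel (for which $\xi = 1$) on a simplified, single-interface version of the stripe model, $y(\underline{x}) = \mathrm{sign}(x_1)$, and fits the learning-curve exponent $\beta$ from the measured test errors. The kernel choice, Gaussian data distribution, interface position, and training-set sizes are assumptions made for this example, not specifications taken from the paper.

import numpy as np
from sklearn.svm import SVC

def laplace_kernel(X, Y, sigma):
    # Isotropic Laplace kernel exp(-|x - y| / sigma); its cusp at the
    # origin corresponds to xi = 1 in beta = (d - 1 + xi) / (3d - 3 + xi).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.sqrt(np.maximum(sq, 0.0)) / sigma)

def stripe_labels(X):
    # Single-interface stripe model (illustrative choice): the label depends
    # on the first coordinate only, with no margin at the interface x_1 = 0.
    return np.sign(X[:, 0])

d, sigma = 5, 10.0            # input dimension and (large) kernel bandwidth
rng = np.random.default_rng(0)
X_test = rng.standard_normal((4096, d))
errors = {}
for p in [128, 256, 512, 1024, 2048]:
    X_train = rng.standard_normal((p, d))
    svm = SVC(C=1e6, kernel=lambda A, B: laplace_kernel(A, B, sigma))
    svm.fit(X_train, stripe_labels(X_train))
    errors[p] = np.mean(svm.predict(X_test) != stripe_labels(X_test))

# Fit beta from eps ~ p^(-beta) and compare with the predicted exponent.
ps = np.array(sorted(errors))
beta = -np.polyfit(np.log(ps), np.log([errors[p] for p in ps]), 1)[0]
print(f"measured beta ~ {beta:.2f}, prediction for xi = 1: {d / (3 * d - 2):.2f}")

In practice, a clean power law would require averaging the test error over several independent training sets and reaching larger $p$; the single-seed run above only indicates the trend.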
