
Minimum complexity interpolation in random features models

Abstract

Despite their many appealing properties, kernel methods are heavily affected by the curse of dimensionality. For instance, in the case of inner product kernels in $\mathbb{R}^d$, the Reproducing Kernel Hilbert Space (RKHS) norm is often very large for functions that depend strongly on a small subset of directions (ridge functions). Correspondingly, such functions are difficult to learn using kernel methods. This observation has motivated the study of generalizations of kernel methods, whereby the RKHS norm -- which is equivalent to a weighted $\ell_2$ norm -- is replaced by a weighted functional $\ell_p$ norm, which we refer to as the $\mathcal{F}_p$ norm. Unfortunately, the tractability of these approaches is unclear. The kernel trick is not available, and minimizing these norms requires solving an infinite-dimensional convex problem. We study random features approximations to these norms and show that, for $p>1$, the number of random features required to approximate the original learning problem is upper bounded by a polynomial in the sample size. Hence, learning with $\mathcal{F}_p$ norms is tractable in these cases. We introduce a proof technique based on uniform concentration in the dual, which can be of broader interest in the study of overparametrized models. For $p=1$, our guarantees for the random features approximation break down. We prove instead that learning with the $\mathcal{F}_1$ norm is $\mathsf{NP}$-hard under a randomized reduction based on the problem of learning halfspaces with noise.
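
To make the setting concrete, here is a minimal sketch (an illustration under assumptions, not the paper's implementation) of the kind of finite-dimensional surrogate the random features approximation produces: fit a model with $N$ frozen random features and an $\ell_p$ penalty on the coefficients for some $p > 1$. The ReLU features, the penalty weight `lam`, the choice `p = 1.5`, and the synthetic ridge-function target are all hypothetical choices made for illustration.

```python
# Illustrative sketch only: a random features model with an l_p penalty on the
# coefficients, as a finite-dimensional surrogate for the F_p-norm problem.
# Feature map, penalty weight, and p are assumptions, not the paper's setup.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n, d, N = 200, 20, 300          # samples, input dimension, number of random features
X = rng.standard_normal((n, d))
u = np.zeros(d); u[0] = 1.0
y = np.maximum(X @ u, 0.0)      # ridge-function target: depends on one direction

W = rng.standard_normal((N, d)) / np.sqrt(d)   # frozen random feature directions
Phi = np.maximum(X @ W.T, 0.0)                  # n x N ReLU feature matrix

p, lam = 1.5, 1e-3   # p > 1: penalty is smooth, problem remains convex

def objective_and_grad(a):
    """Squared loss plus lam * ||a||_p^p, with its gradient."""
    resid = Phi @ a - y
    loss = 0.5 * np.mean(resid ** 2) + lam * np.sum(np.abs(a) ** p)
    grad = Phi.T @ resid / len(y) + lam * p * np.sign(a) * np.abs(a) ** (p - 1)
    return loss, grad

res = minimize(objective_and_grad, x0=np.zeros(N), jac=True, method="L-BFGS-B")
print("train MSE:", np.mean((Phi @ res.x - y) ** 2))
```

For $p = 1$ the penalty is no longer smooth and, as the abstract notes, the guarantees for this kind of random features surrogate break down.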
