
On The Relative Error of Random Fourier Features for Preserving Kernel Distance

International Conference on Learning Representations (ICLR), 2022
Abstract

The method of random Fourier features (RFF), proposed in a seminal paper by Rahimi and Recht (NIPS'07), is a powerful technique for finding approximate low-dimensional representations of points in (high-dimensional) kernel space, for shift-invariant kernels. While RFF has been analyzed under various notions of error guarantee, its ability to preserve the kernel distance with \emph{relative} error is less understood. We show that for a significant range of kernels, including the well-known Laplacian kernels, RFF cannot approximate the kernel distance with small relative error using low dimensions. We complement this by showing that as long as the shift-invariant kernel is analytic, RFF with $\mathrm{poly}(\epsilon^{-1} \log n)$ dimensions achieves $\epsilon$-relative error for the pairwise kernel distances of $n$ points, and the dimension bound improves to $\mathrm{poly}(\epsilon^{-1} \log k)$ for the specific application of kernel $k$-means. Finally, going beyond RFF, we take the first step towards data-oblivious dimension reduction for general shift-invariant kernels, obtaining a similar $\mathrm{poly}(\epsilon^{-1} \log n)$ dimension bound for Laplacian kernels. We also validate the dimension-error tradeoff of our methods on simulated datasets, where they demonstrate superior performance compared with other popular methods, including the random-projection and Nystr\"{o}m methods.
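For readers unfamiliar with the construction, the following is a minimal sketch of RFF for the Gaussian kernel $k(x,y)=\exp(-\|x-y\|^2/2)$ (all names and parameter choices here are illustrative, not taken from the paper): by Bochner's theorem, the kernel is the Fourier transform of a probability distribution over frequencies, so sampling $D$ random frequencies and phases yields an embedding whose Euclidean distances approximate the kernel distance $\sqrt{2-2k(x,y)}$.

```python
import numpy as np

def rff_features(X, D, rng):
    """Random Fourier features for the Gaussian kernel k(x,y)=exp(-||x-y||^2/2).

    The spectral measure of this kernel is the standard Gaussian, so
    frequencies W are drawn from N(0, I); b are uniform random phases.
    """
    d = X.shape[1]
    W = rng.standard_normal((d, D))        # frequencies ~ spectral measure of the kernel
    b = rng.uniform(0.0, 2.0 * np.pi, D)   # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Z = rff_features(X, 4096, rng)

# Squared kernel distance: ||phi(x) - phi(y)||^2 = 2 - 2 k(x, y),
# approximated by the squared Euclidean distance of the RFF embeddings.
x, y = X[0], X[1]
true_dist2 = 2.0 - 2.0 * np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
approx_dist2 = np.linalg.norm(Z[0] - Z[1]) ** 2
```

With $D = 4096$ features the additive error is typically on the order of $1/\sqrt{D}$; the paper's question is when such an embedding can guarantee small \emph{relative} error with $D$ only polylogarithmic in the number of points.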
