
Kernel Thinning

Annual Conference on Computational Learning Theory (COLT), 2021
Abstract

We introduce kernel thinning, a new procedure for compressing a distribution $\mathbb{P}$ more effectively than i.i.d. sampling or standard thinning. Given a suitable reproducing kernel $\mathbf{k}$ and $\mathcal{O}(n^2)$ time, kernel thinning compresses an $n$-point approximation to $\mathbb{P}$ into a $\sqrt{n}$-point approximation with comparable worst-case integration error in the associated reproducing kernel Hilbert space. With high probability, the maximum discrepancy in integration error is $\mathcal{O}_d(n^{-\frac{1}{2}}\sqrt{\log n})$ for compactly supported $\mathbb{P}$ and $\mathcal{O}_d(n^{-\frac{1}{2}}\sqrt{(\log n)^{d+1}\log\log n})$ for sub-exponential $\mathbb{P}$ on $\mathbb{R}^d$. In contrast, an equal-sized i.i.d. sample from $\mathbb{P}$ suffers $\Omega(n^{-\frac{1}{4}})$ integration error. Our sub-exponential guarantees resemble the classical quasi-Monte Carlo error rates for uniform $\mathbb{P}$ on $[0,1]^d$ but apply to general distributions on $\mathbb{R}^d$ and a wide range of common kernels. We use our results to derive explicit non-asymptotic maximum mean discrepancy bounds for Gaussian, Matérn, and B-spline kernels and present two vignettes illustrating the practical benefits of kernel thinning over i.i.d. sampling and standard Markov chain Monte Carlo thinning.
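As a rough illustration of the quantity these bounds control, the sketch below compares the maximum mean discrepancy (MMD) between a full $n$-point sample and a $\sqrt{n}$-point coreset obtained by (i) i.i.d. subsampling and (ii) a simplified, deterministic greedy halving applied repeatedly. The greedy sign rule, the Gaussian kernel, and the bandwidth are illustrative assumptions; this is not the paper's kernel thinning algorithm, which relies on a randomized, self-balancing construction with stronger guarantees.

```python
# Simplified illustration of the kernel-halving idea behind kernel thinning.
# NOTE: this is NOT the paper's algorithm; it is a deterministic greedy variant
# meant only to show how repeated halving can shrink an n-point sample to
# ~sqrt(n) points while keeping kernel integration error (MMD) small.
import numpy as np


def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * bandwidth**2))


def greedy_halve(X, bandwidth=1.0):
    """Keep one point from each consecutive pair, greedily choosing the sign that
    keeps the RKHS norm of the signed difference between the two halves small."""
    n = len(X)
    pairs = [(2 * i, 2 * i + 1) for i in range(n // 2)]
    K = gaussian_kernel(X, X, bandwidth)
    signs = np.zeros(n // 2)
    for i, (a, b) in enumerate(pairs):
        # Inner product of the running signed sum with f_i = k(x_a,.) - k(x_b,.)
        inner = 0.0
        for j in range(i):
            c, d = pairs[j]
            inner += signs[j] * (K[c, a] - K[c, b] - K[d, a] + K[d, b])
        signs[i] = -1.0 if inner > 0 else 1.0
    kept = [a if s > 0 else b for s, (a, b) in zip(signs, pairs)]
    return X[kept]


def mmd(X, S, bandwidth=1.0):
    """Plug-in MMD between the empirical distributions of X and S."""
    Kxx = gaussian_kernel(X, X, bandwidth).mean()
    Kss = gaussian_kernel(S, S, bandwidth).mean()
    Kxs = gaussian_kernel(X, S, bandwidth).mean()
    return np.sqrt(max(Kxx + Kss - 2 * Kxs, 0.0))


rng = np.random.default_rng(0)
n = 2**10                                  # power of 4, so repeated halving lands on sqrt(n)
X = rng.normal(size=(n, 2))                # stand-in for an n-point approximation of P

thinned = X
for _ in range(5):                         # halve log2(sqrt(n)) = 5 times: 1024 -> 32 points
    thinned = greedy_halve(thinned)
iid = X[rng.choice(n, size=len(thinned), replace=False)]

print("MMD(full, greedy-thinned):  ", mmd(X, thinned))
print("MMD(full, i.i.d. subsample):", mmd(X, iid))
```

On typical runs the greedy-thinned coreset attains a noticeably smaller MMD than an equal-sized i.i.d. subsample, mirroring (in a loose, empirical sense) the $n^{-1/2}$-vs-$n^{-1/4}$ gap stated in the abstract.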
