Generalized Kernel Thinning

Abstract

The kernel thinning (KT) algorithm of Dwivedi and Mackey (2021) compresses an $n$-point distributional summary into a $\sqrt{n}$-point summary with better-than-Monte-Carlo maximum mean discrepancy (MMD) for a target kernel $\mathbf{k}$ by leveraging a less smooth square-root kernel. Here we provide four improvements. First, we show that KT applied directly to the target kernel yields a tighter $\mathcal{O}(\sqrt{\log n/n})$ integration error bound for each function $f$ in the reproducing kernel Hilbert space. This modification extends the reach of KT to any kernel -- even non-smooth kernels that do not admit a square-root -- demonstrates that KT is suitable even for heavy-tailed target distributions, and eliminates the exponential dimension-dependence and $(\log n)^{d/2}$ factors of standard square-root KT. Second, we show that, for analytic kernels, like Gaussian and inverse multiquadric, target kernel KT admits MMD guarantees comparable to square-root KT without the need for an explicit square-root kernel. Third, we prove that KT with a fractional $\alpha$-power kernel $\mathbf{k}_{\alpha}$ for $\alpha > 1/2$ yields better-than-Monte-Carlo MMD guarantees for non-smooth kernels, like Laplace and Matérn, that do not have square-roots. Fourth, we establish that KT applied to a sum of $\mathbf{k}$ and $\mathbf{k}_{\alpha}$ (a procedure we call KT+) simultaneously inherits the improved MMD guarantees of power KT and the tighter individual function guarantees of KT on the target kernel. Finally, we illustrate the practical benefits of target KT and KT+ for compression after high-dimensional independent sampling and challenging Markov chain Monte Carlo posterior inference.
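Two ingredients mentioned in the abstract are easy to make concrete: the MMD used to measure coreset quality, and the summed kernel $\mathbf{k} + \mathbf{k}_{\alpha}$ that KT+ thins. The sketch below is an illustrative assumption, not the paper's algorithm: it uses a Gaussian target kernel, a pointwise Laplace power as a hypothetical stand-in for $\mathbf{k}_{\alpha}$ (the paper's fractional power kernels for Laplace are Matérn-type), and naive standard thinning in place of KT as the baseline coreset.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    """Target kernel k: Gaussian with bandwidth sigma (illustrative choice)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def power_kernel(X, Y, sigma=1.0, alpha=0.75):
    """Hypothetical stand-in for a fractional alpha-power kernel k_alpha, alpha > 1/2.

    Here a Laplace kernel is simply raised to the pointwise power alpha for
    illustration; the paper's power kernels are Matern-type constructions.
    """
    dists = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1))
    return np.exp(-dists / sigma) ** alpha

def ktplus_kernel(X, Y):
    """KT+ thins with respect to the sum k + k_alpha of target and power kernels."""
    return gauss_kernel(X, Y) + power_kernel(X, Y)

def mmd(kernel, X, Y):
    """MMD between the empirical distributions of X and Y under `kernel`."""
    mmd_sq = kernel(X, X).mean() - 2.0 * kernel(X, Y).mean() + kernel(Y, Y).mean()
    return np.sqrt(max(mmd_sq, 0.0))  # clamp tiny negative values from round-off

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 1024, 2
    X = rng.standard_normal((n, d))       # n-point input summary
    coreset = X[:: int(np.sqrt(n))]       # naive sqrt(n)-point baseline (standard thinning)
    print("MMD_k(full, coreset):          ", mmd(gauss_kernel, X, coreset))
    print("MMD_{k+k_alpha}(full, coreset):", mmd(ktplus_kernel, X, coreset))
```

The paper's guarantees concern how much smaller these MMD values (and the per-function integration errors) can be made when the $\sqrt{n}$-point coreset is chosen by KT, target KT, power KT, or KT+ rather than by the naive thinning used above.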
