Distribution Compression in Near-linear Time

Abstract

In distribution compression, one aims to accurately summarize a probability distribution $\mathbb{P}$ using a small number of representative points. Near-optimal thinning procedures achieve this goal by sampling $n$ points from a Markov chain and identifying $\sqrt{n}$ points with $\widetilde{\mathcal{O}}(1/\sqrt{n})$ discrepancy to $\mathbb{P}$. Unfortunately, these algorithms suffer from quadratic or super-quadratic runtime in the sample size $n$. To address this deficiency, we introduce Compress++, a simple meta-procedure for speeding up any thinning algorithm while suffering at most a factor of $4$ in error. When combined with the quadratic-time kernel halving and kernel thinning algorithms of Dwivedi and Mackey (2021), Compress++ delivers $\sqrt{n}$ points with $\mathcal{O}(\sqrt{\log n / n})$ integration error and better-than-Monte-Carlo maximum mean discrepancy in $\mathcal{O}(n \log^3 n)$ time and $\mathcal{O}(\sqrt{n} \log^2 n)$ space. Moreover, Compress++ enjoys the same near-linear runtime given any quadratic-time input and reduces the runtime of super-quadratic algorithms by a square-root factor. In our benchmarks with high-dimensional Monte Carlo samples and Markov chains targeting challenging differential equation posteriors, Compress++ matches or nearly matches the accuracy of its input algorithm in orders of magnitude less time.
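To make the meta-procedure concrete, below is a minimal sketch of the divide-and-conquer compression idea the abstract describes: recursively split the input into four pieces, compress each, and halve the concatenated result, so each halving call touches only $\mathcal{O}(2^g \sqrt{n})$ points. This is an illustration under assumptions, not the paper's exact pseudocode; the function names `compress` and `halve`, the oversampling parameter `g`, and the toy halving rule in the usage example are hypothetical stand-ins (in practice one would plug in a routine such as the kernel halving of Dwivedi and Mackey (2021)).

```python
import numpy as np

def compress(points, halve, g):
    """Recursively compress n points down to roughly 2**g * sqrt(n) points.

    `halve` is any algorithm mapping m points to m/2 points (e.g., kernel
    halving); `g` is an oversampling parameter trading accuracy for size.
    """
    n = len(points)
    if n <= 4 ** g:  # base case: n <= 2**g * sqrt(n), so return the points as-is
        return points
    # Compress each quarter to 2**g * sqrt(n)/2 points, giving
    # 4 * 2**g * sqrt(n)/2 = 2**(g+1) * sqrt(n) points in total.
    quarters = np.array_split(points, 4)
    combined = np.concatenate([compress(q, halve, g) for q in quarters])
    # One halving round yields the target size of 2**g * sqrt(n) points.
    return halve(combined)

# Usage with a deliberately trivial halving rule (keep every other point),
# just to exercise the recursion; sizes assume n is a power of 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((4 ** 6, 2))          # n = 4096 points in 2D
coreset = compress(x, lambda s: s[::2], g=2)  # 2**2 * sqrt(4096) = 256 points
```

In this reading, the final thinning step of Compress++ (reducing the $2^g \sqrt{n}$-point output back to $\sqrt{n}$ points with the input thinning algorithm) is omitted; only the recursive compression stage is shown.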
