
Sample Amplification: Increasing Dataset Size even when Learning is Impossible

Abstract

Given data drawn from an unknown distribution $D$, to what extent is it possible to ``amplify'' this dataset and output an even larger set of samples that appear to have been drawn from $D$? We formalize this question as follows: an $(n,m)$ amplification procedure takes as input $n$ independent draws from an unknown distribution $D$, and outputs a set of $m > n$ ``samples''. An amplification procedure is valid if no algorithm can distinguish the set of $m$ samples produced by the amplifier from a set of $m$ independent draws from $D$, with probability greater than $2/3$. Perhaps surprisingly, in many settings, a valid amplification procedure exists, even when the size of the input dataset, $n$, is significantly less than what would be necessary to learn $D$ to non-trivial accuracy. Specifically, we consider two fundamental settings: the case where $D$ is an arbitrary discrete distribution supported on $\le k$ elements, and the case where $D$ is a $d$-dimensional Gaussian with unknown mean and fixed covariance. In the first case, we show that an $\left(n, n + \Theta(\frac{n}{\sqrt{k}})\right)$ amplifier exists. In particular, given $n = O(\sqrt{k})$ samples from $D$, one can output a set of $m = n+1$ datapoints whose total variation distance from the distribution of $m$ i.i.d. draws from $D$ is a small constant, despite the fact that one would need quadratically more data, $n = \Theta(k)$, to learn $D$ up to small constant total variation distance. In the Gaussian case, we show that an $\left(n, n + \Theta(\frac{n}{\sqrt{d}})\right)$ amplifier exists, even though learning the distribution to small constant total variation distance requires $\Theta(d)$ samples. In both the discrete and Gaussian settings, we show that these results are tight, to constant factors. Beyond these results, we formalize a number of curious directions for future research along this vein.
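To make the $(n,m)$ amplification interface concrete, here is a minimal sketch of a toy amplifier that appends draws from the empirical distribution of the input and shuffles the result. This is an illustration of the definition only, under the assumption that resampling from the observed points is a reasonable first attempt; it is not claimed to be the paper's (carefully constructed) valid amplifier, and the function names are hypothetical.

```python
import random

def amplify(samples, m):
    """Toy (n, m) amplifier sketch (not the paper's construction).

    Returns the n input draws plus m - n extra points resampled
    from the empirical distribution, shuffled so an observer cannot
    tell which points were original draws by position alone.
    """
    n = len(samples)
    assert m > n, "an amplifier must output strictly more points than it receives"
    extra = [random.choice(samples) for _ in range(m - n)]
    out = list(samples) + extra
    random.shuffle(out)
    return out

# Example: n = 20 draws from a discrete distribution on k = 100 elements,
# amplified to m = 22 purported "samples".
data = [random.randrange(100) for _ in range(20)]
amplified = amplify(data, 22)
print(len(amplified))  # 22
```

A distinguisher attacking this toy amplifier would look for suspicious duplicates among the extra points; the paper's results quantify exactly how many extra points ($\Theta(n/\sqrt{k})$ in the discrete case) can be produced before any distinguisher succeeds with probability above $2/3$.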
