
Private Distribution Learning with Public Data: The View from Sample Compression

Abstract

We study the problem of private distribution learning with access to public data. In this setup, which we refer to as public-private learning, the learner is given public and private samples drawn from an unknown distribution $p$ belonging to a class $\mathcal{Q}$, with the goal of outputting an estimate of $p$ while adhering to privacy constraints (here, pure differential privacy) only with respect to the private samples. We show that the public-private learnability of a class $\mathcal{Q}$ is connected to the existence of a sample compression scheme for $\mathcal{Q}$, as well as to an intermediate notion we refer to as list learning. Leveraging this connection, we (1) approximately recover previous results on Gaussians over $\mathbb{R}^d$, and (2) obtain new ones, including sample complexity upper bounds for arbitrary $k$-mixtures of Gaussians over $\mathbb{R}^d$, results for agnostic and distribution-shift-resistant learners, and closure properties for public-private learnability under taking mixtures and products of distributions. Finally, via the connection to list learning, we show that for Gaussians in $\mathbb{R}^d$, at least $d$ public samples are necessary for private learnability, which is close to the known upper bound of $d+1$ public samples.
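For context, the privacy requirement in this model can be formalized via the standard definition of pure differential privacy, applied only to the private portion of the input. The sketch below uses our own notation ($A$ for the learner, $T$ for the public samples, $S$ for the private samples, $\varepsilon$ for the privacy parameter), which may differ from the paper's.

```latex
% A minimal formalization of the public-private privacy constraint,
% assuming the standard definition of pure \varepsilon-differential privacy.
% Notation (A, T, S, S', E, \varepsilon) is ours, not taken from the paper.
% The randomized learner A(T, S) must satisfy, for every fixed public
% sample set T, every pair of private sample sets S, S' differing in a
% single element, and every measurable set E of outputs:
\[
  \Pr\bigl[A(T, S) \in E\bigr]
  \;\le\;
  e^{\varepsilon} \cdot \Pr\bigl[A(T, S') \in E\bigr].
\]
```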
