
Revisiting Classifier Two-Sample Tests

David Lopez-Paz
Maxime Oquab
Abstract

The goal of two-sample tests is to assess whether two samples, $S_P \sim P^n$ and $S_Q \sim Q^m$, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the $n$ examples in $S_P$ with a positive label, and by pairing the $m$ examples in $S_Q$ with a negative label. If the null hypothesis "$P = Q$" is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allows one to interpret where $P$ and $Q$ differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.
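To illustrate the procedure described in the abstract, the sketch below implements a minimal C2ST in Python. It assumes scikit-learn and SciPy are available, uses an arbitrary choice of classifier (a small neural network), and relies on a Gaussian approximation to the chance-level null distribution of the held-out accuracy; it is an illustrative sketch under these assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a Classifier Two-Sample Test (C2ST).
# Assumptions: scikit-learn classifier, Gaussian approximation to the
# null distribution of the held-out accuracy under H0: P = Q.
import numpy as np
from scipy.stats import norm
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier


def c2st_p_value(S_P, S_Q, seed=0):
    """Return the held-out accuracy and an approximate p-value for H0: P = Q."""
    # Label the samples from P as 1 and the samples from Q as 0.
    X = np.concatenate([S_P, S_Q], axis=0)
    y = np.concatenate([np.ones(len(S_P)), np.zeros(len(S_Q))])

    # Hold out half of the paired dataset for evaluation.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)

    # Any binary classifier can be used; a small MLP is one possible choice.
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                        random_state=seed).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)

    # Under H0, the held-out accuracy is approximately
    # Normal(1/2, 1/(4 * n_test)); accuracies far above chance reject H0.
    n_te = len(y_te)
    p_value = 1.0 - norm.cdf(acc, loc=0.5, scale=np.sqrt(0.25 / n_te))
    return acc, p_value


# Usage example: two Gaussian samples that differ in mean.
rng = np.random.RandomState(0)
S_P = rng.randn(500, 2)
S_Q = rng.randn(500, 2) + 0.5
print(c2st_p_value(S_P, S_Q))
```

In this sketch, accuracy near 0.5 is consistent with $P = Q$, while accuracy significantly above 0.5 provides evidence against the null hypothesis; the classifier's per-example predictions can additionally be inspected to see where the two samples differ.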
