Revisiting Classifier Two-Sample Tests for GAN Evaluation and Causal Discovery

David Lopez-Paz, Maxime Oquab
Abstract

The goal of two-sample tests is to decide whether two probability distributions, denoted by P and Q, are equal. One simple method to construct flexible two-sample tests is to use binary classifiers. More specifically, pair n random samples drawn from P with a positive label, and pair n random samples drawn from Q with a negative label. If the null hypothesis "P = Q" is true, the classification accuracy of a binary classifier on a hold-out subset of these data should remain near chance level. Since the hold-out classification accuracy is an average of independent random variables under the null hypothesis, the two-sample test statistic follows a Binomial distribution. Furthermore, the decision boundary of our binary classifier offers insight into the differences between P and Q. In particular, this boundary can be used to analyze which samples were correctly or incorrectly labeled by the classifier, with the least or most confidence. The goal of this paper is to revive the interest in classifier two-sample tests for a variety of applications, including independence testing, generative model evaluation, and causal discovery. To this end, we study their fundamentals, review prior literature on their applications, compare their performance against alternative state-of-the-art two-sample tests, and propose their use to evaluate generative adversarial network models applied to image synthesis. As a novel application of our research, we propose the use of conditional generative adversarial networks, together with classifier two-sample tests, to achieve state-of-the-art causal discovery.
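A minimal sketch of the testing procedure described above, assuming scikit-learn and SciPy; the MLP classifier, the 50/50 hold-out split, and the `c2st_p_value` helper are illustrative choices for this sketch, not the specific setup used in the paper.

```python
import numpy as np
from scipy.stats import binom
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split


def c2st_p_value(samples_p, samples_q, seed=0):
    """One-sided p-value for the null hypothesis P = Q via a classifier two-sample test."""
    # Label samples from P as 1 and samples from Q as 0.
    X = np.vstack([samples_p, samples_q])
    y = np.concatenate([np.ones(len(samples_p)), np.zeros(len(samples_q))])

    # Hold out half of the labeled data to evaluate the classifier.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed
    )

    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                        random_state=seed).fit(X_tr, y_tr)

    # Under the null, each hold-out prediction is correct with probability 1/2,
    # so the number of correct predictions follows Binomial(n_test, 1/2).
    n_test = len(y_te)
    n_correct = int((clf.predict(X_te) == y_te).sum())
    return binom.sf(n_correct - 1, n_test, 0.5)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.normal(0.0, 1.0, size=(500, 2))  # samples from P
    q = rng.normal(0.5, 1.0, size=(500, 2))  # samples from Q (shifted mean)
    print("p-value:", c2st_p_value(p, q))    # small p-value -> reject P = Q
```

When P and Q differ, the classifier exceeds chance accuracy on the hold-out set and the Binomial tail probability becomes small; when P = Q, the accuracy stays near 0.5 and the p-value is large.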
