LOGAN: Evaluating Privacy Leakage of Generative Models Using Generative Adversarial Networks

Abstract

Generative models are increasingly used to artificially generate various kinds of data, including high-quality images and videos. These models estimate the underlying distribution of a dataset and randomly generate realistic samples according to that estimated distribution. However, the data used to train these models is often sensitive, prompting the need to evaluate information leakage from producing synthetic samples with generative models---specifically, whether an adversary can infer information about the data used to train the models. In this paper, we present the first membership inference attack on generative models. To mount the attack, we train a Generative Adversarial Network (GAN), which combines a discriminative and a generative model, to detect overfitting and recognize inputs that are part of training datasets, relying on the discriminator's capacity to learn statistical differences between distributions. We present attacks based on both white-box and black-box access to the target model, and show how to improve the latter using limited auxiliary knowledge of dataset samples. We test our attacks on several state-of-the-art models, such as Deep Convolutional GAN (DCGAN), Boundary Equilibrium GAN (BEGAN), and the combination of DCGAN with a Variational Autoencoder (DCGAN+VAE), using datasets consisting of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). The white-box attacks are 100% successful at inferring which samples were used to train the target model, and the black-box ones succeed with 80% accuracy. Finally, we discuss the sensitivity of our attacks to different training parameters, and their robustness against mitigation strategies, finding that successful defenses often result in significantly worse performance of the generative models in terms of training stability and/or sample quality.
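The core idea of the white-box attack described above can be sketched in a few lines: score every candidate sample with the target model's discriminator and flag the highest-scoring samples as training members, since an overfit discriminator assigns higher confidence to samples it has seen. The snippet below is a minimal illustrative sketch, not the paper's implementation; the `mock_discriminator` is a hypothetical stand-in for a trained discriminator, and the toy data is invented for demonstration.

```python
import numpy as np

def infer_membership(discriminator, candidates, n_members):
    """White-box attack sketch: score each candidate with the target
    discriminator and flag the n_members highest-scoring samples as
    training members (overfitting makes the discriminator more
    confident on samples from its training set)."""
    scores = np.array([discriminator(x) for x in candidates])
    ranked = np.argsort(scores)[::-1]  # highest confidence first
    return set(ranked[:n_members].tolist())

# --- toy demonstration with an assumed stand-in discriminator ---
rng = np.random.default_rng(0)
members = rng.normal(0.0, 1.0, size=(50, 8))      # "training" samples
non_members = rng.normal(0.5, 1.0, size=(50, 8))  # fresh samples
candidates = np.vstack([members, non_members])

# Hypothetical overfit discriminator: scores samples higher the closer
# they lie to the training distribution's mean (a crude proxy for the
# memorization effect the attack exploits).
def mock_discriminator(x):
    return -np.linalg.norm(x)

predicted = infer_membership(mock_discriminator, candidates, 50)
true_members = set(range(50))
accuracy = len(predicted & true_members) / 50
print(f"attack accuracy on toy data: {accuracy:.2f}")
```

Because the stand-in discriminator only weakly separates the two toy distributions, the sketch recovers membership well above chance but far below the 100% the paper reports for real white-box attacks; the point is only to show the ranking-by-confidence mechanism.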
