Y-GAN: Learning Dual Data Representations for Efficient Anomaly Detection

We propose a novel reconstruction-based model for anomaly detection, called Y-GAN. The model consists of a Y-shaped auto-encoder and represents images in two separate latent spaces. The first captures meaningful image semantics, key for representing (normal) training data, whereas the second encodes low-level residual image characteristics. To ensure the dual representations encode mutually exclusive information, a disentanglement procedure is designed around a latent (proxy) classifier. Additionally, a novel consistency loss is proposed to prevent information leakage between the latent spaces. The model is trained in a one-class learning setting using normal training data only. Due to the separation of semantically relevant and residual information, Y-GAN is able to derive informative data representations that allow for efficient anomaly detection across a diverse set of anomaly detection tasks. The model is evaluated in comprehensive experiments against several recent anomaly detection models on four popular datasets, i.e., MNIST, FMNIST, CIFAR10, and PlantVillage.
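To make the dual-representation idea concrete, the following is a minimal PyTorch sketch of a Y-shaped auto-encoder with separate semantic and residual encoder branches, a shared decoder, and a reconstruction-based anomaly score. The layer sizes, the `proxy` classifier head, and the `anomaly_score` helper are illustrative assumptions for this sketch, not the paper's exact architecture, disentanglement procedure, or scoring rule.

```python
import torch
import torch.nn as nn


class YAutoEncoder(nn.Module):
    """Minimal sketch of a Y-shaped auto-encoder with two latent spaces.

    Assumptions: MLP encoders/decoder on flattened images, equal-sized
    latent codes, and a linear proxy classifier that predicts which
    branch a code came from (one possible disentanglement proxy).
    """

    def __init__(self, in_dim: int = 784, z_dim: int = 32):
        super().__init__()
        # Two encoder branches: semantic (E_s) and residual (E_r).
        self.enc_sem = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
        self.enc_res = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
        # Shared decoder reconstructs the image from both codes.
        self.dec = nn.Sequential(
            nn.Linear(2 * z_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))
        # Latent (proxy) classifier: trained to tell the two codes apart,
        # so the branches are pushed toward mutually exclusive information.
        self.proxy = nn.Linear(z_dim, 2)

    def forward(self, x):
        z_s, z_r = self.enc_sem(x), self.enc_res(x)
        x_hat = self.dec(torch.cat([z_s, z_r], dim=1))
        return x_hat, z_s, z_r


def anomaly_score(model: YAutoEncoder, x: torch.Tensor) -> torch.Tensor:
    # Simple per-sample score: mean squared reconstruction error,
    # assuming higher error indicates a more anomalous input.
    x_hat, _, _ = model(x)
    return ((x - x_hat) ** 2).mean(dim=1)


if __name__ == "__main__":
    model = YAutoEncoder()
    batch = torch.rand(8, 784)  # e.g., flattened 28x28 images
    print(anomaly_score(model, batch).shape)  # torch.Size([8])
```

At test time, scores above a threshold chosen on normal validation data would be flagged as anomalies; the one-class training, adversarial components, and consistency loss described in the abstract are omitted from this sketch.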