Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders
Deep neural networks have great predictive power on in-distribution test data, but they tend to produce incorrect outputs, with high confidence, for out-of-distribution (OOD) samples. Hence, it is of utmost importance to detect anomalous samples before deploying deep models in the real world. Deep generative models have shown great promise for anomaly detection owing to their ability to learn rich representations of complex input data distributions. However, earlier studies have shown that generative models can assign higher likelihoods to OOD samples than to the data distribution they were trained on. In this work, we propose the Adversarial Mirrored AutoEncoder (AMA), a simple modification of the Adversarial Autoencoder in which a Mirrored Wasserstein loss in the discriminator enforces better semantic-level reconstruction. We also propose a new metric for anomaly quantification in place of the plain reconstruction-based metric used in most recent generative-model-based anomaly detection methods. We show that, in an unsupervised setting, our model outperforms or matches recent generative-model-based anomaly detectors on CIFAR-10 and MNIST for both in-distribution and OOD anomalies.
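The core idea of the Mirrored Wasserstein loss is that the discriminator scores pairs rather than single images: real pairs (x, x) against mirrored pairs (x, x_recon), so reconstructions are judged jointly with the inputs they should match. The abstract does not give the exact formulation, so the following is only a minimal NumPy sketch under that assumption; `critic` and `mirrored_wasserstein_loss` are hypothetical names, not the authors' code.

```python
import numpy as np

def mirrored_wasserstein_loss(critic, x, x_recon):
    """Sketch of a mirrored Wasserstein critic objective (hypothetical helper).

    The critic sees concatenated pairs: real pairs (x, x) versus
    mirrored pairs (x, x_recon), so the reconstruction is evaluated
    jointly with its input rather than pixel-by-pixel.
    """
    real_pairs = np.concatenate([x, x], axis=-1)
    fake_pairs = np.concatenate([x, x_recon], axis=-1)
    # Wasserstein-style gap: the critic is trained to maximize it,
    # the autoencoder to minimize it.
    return critic(real_pairs).mean() - critic(fake_pairs).mean()

# Toy usage with a linear critic on a batch of 4 flattened inputs.
rng = np.random.default_rng(0)
w = rng.normal(size=8)                    # critic weights for 4+4 features
critic = lambda pairs: pairs @ w
x = rng.normal(size=(4, 4))
loss = mirrored_wasserstein_loss(critic, x, x)  # perfect reconstruction
print(loss)  # gap is exactly 0 when x_recon == x
```

With a perfect reconstruction the two pair distributions coincide and the gap vanishes; an anomalous input that the autoencoder cannot reconstruct well would leave a nonzero gap, which is what makes a pair-based critic usable for anomaly scoring.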