InfoVAE: Information Maximizing Variational Autoencoders
- DRL
It has been previously observed that variational autoencoders tend to ignore the latent code when combined with a decoding distribution that is too flexible. This undermines the purpose of unsupervised representation learning. We identify the reason for this shortcoming in the regularization term used in the ELBO criterion to match the variational posterior to the latent prior distribution. We show that removing this regularization term leads to a model that can still discover meaningful latent features. Even though ancestral sampling is no longer tractable, sampling remains possible via a Markov chain. Furthermore, we propose a class of training criteria that use alternative divergences for the regularization term, generalizing the standard ELBO, which employs the KL divergence. These models can discover meaningful latent features and allow for tractable ancestral sampling. In particular, we propose an alternative based on Maximum Mean Discrepancy (MMD) that is simple to implement, robust, and has similar or better performance on every quantitative and qualitative metric we evaluated.
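To make the MMD alternative concrete, here is a minimal sketch of a sample-based squared-MMD estimator with an RBF kernel, the kind of term that could replace the KL regularizer by comparing samples from the aggregate posterior against samples from the prior. The function names, kernel choice, and bandwidth are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd(x, y, sigma=1.0):
    # Biased estimator of squared MMD between samples x ~ q(z) and y ~ p(z).
    # Near zero when the two sample sets come from the same distribution.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())
```

In an InfoVAE-style objective, `mmd(encoder_samples, prior_samples)` would be added to the reconstruction loss in place of (or alongside) the KL term; unlike the KL divergence, it needs only samples, not density evaluations.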