
Training opposing directed models using geometric mean matching

International Conference on Machine Learning (ICML), 2015
Abstract

Unsupervised training of deep generative models containing latent variables, and performing inference in them, remains a challenging problem for complex, high-dimensional distributions. One basic approach to this problem, the so-called Helmholtz machine, trains an auxiliary model that performs approximate inference jointly with the generative model being fitted to the training data. The top-down generative model is typically realized as a directed model that starts from some prior at the top and proceeds down to the empirical distribution at the bottom. The approximate inference model runs in the opposite direction and is typically trained to efficiently infer high-probability latent states given observed data. Here we propose a new method, referred to as geometric mean matching (GMM), based on the idea that the generative model should stay close to the class of distributions that can be modeled by our approximate inference distribution. We achieve this by interpreting both the top-down and the bottom-up directed models as approximate inference distributions and by defining the target distribution we fit to the training data as the geometric mean of these two. We present an upper bound for the log-likelihood of this model and show that optimizing this bound pressures the model to stay close to the approximate inference distributions. In the experimental section we demonstrate that this approach can fit deep generative models with many layers of hidden binary stochastic variables to complex, high-dimensional training distributions.
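
A minimal sketch of the geometric-mean construction described above, in assumed notation that is not fixed by the abstract itself (p for the top-down generative model over observed variables x and latents h, q for the bottom-up inference model, p* for the resulting target distribution):

\[
  p^{*}(x, h) \;=\; \frac{1}{Z}\,\sqrt{p(x, h)\, q(x, h)},
  \qquad
  Z \;=\; \sum_{x, h} \sqrt{p(x, h)\, q(x, h)} \;\le\; 1 .
\]

The bound on the normalizer Z follows from the Cauchy--Schwarz inequality, \(\sum \sqrt{p\,q} \le \sqrt{\sum p}\,\sqrt{\sum q} = 1\), with equality exactly when p = q. Fitting p* to the training data therefore pushes Z toward 1, which is one way to read the abstract's claim that optimizing the bound pressures the two directed models to stay close to each other.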
