Gaussian Auto-Encoder

Abstract

Evaluating the distance between a sample distribution and a desired one, usually Gaussian, is a difficult task required to train generative Auto-Encoders. After the original Variational Auto-Encoder (VAE), which uses KL divergence, superiority was claimed for distances based on the Wasserstein metric (WAE, SWAE) and for the L_2 distance of a KDE Gaussian-smoothed sample over all 1D projections (CWAE). This article likewise derives formulas for the L_2 distance of a KDE Gaussian-smoothed sample, but this time directly using multivariate Gaussians, additionally optimizing a position-dependent covariance matrix with a mean-field approximation, for application in a purely Gaussian Auto-Encoder (GAE).
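For intuition on the kind of objective the abstract describes, the squared L_2 distance between a Gaussian-KDE smoothing of a sample and the standard normal N(0, I) has a closed form, via the identity that the integral of a product of two Gaussian densities N(x; a, A) N(x; b, B) equals N(a - b; 0, A + B). The sketch below assumes a shared isotropic kernel bandwidth sigma2 for all points; it is an illustration of the general technique, not the paper's exact derivation (which optimizes a position-dependent covariance matrix).

```python
import numpy as np

def gauss_density(sq_norm, var, d):
    # Density of N(v; 0, var * I) in d dimensions, given ||v||^2 (elementwise).
    return (2.0 * np.pi * var) ** (-d / 2.0) * np.exp(-sq_norm / (2.0 * var))

def l2_kde_gaussian(X, sigma2):
    """Squared L2 distance between the KDE of sample X (Gaussian kernels with
    covariance sigma2 * I) and the standard normal N(0, I).

    Expands ||p_kde - phi||^2 into three Gaussian product integrals:
      (1/n^2) sum_ij N(x_i - x_j; 0, 2*sigma2*I)
    - (2/n)   sum_i  N(x_i;       0, (1+sigma2)*I)
    +          N(0;  0, 2*I)
    """
    n, d = X.shape
    sq = np.sum(X ** 2, axis=1)
    # Pairwise squared distances ||x_i - x_j||^2.
    cross = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    term_pp = gauss_density(cross, 2.0 * sigma2, d).sum() / n ** 2
    term_pq = 2.0 / n * gauss_density(sq, 1.0 + sigma2, d).sum()
    term_qq = gauss_density(0.0, 2.0, d)
    return term_pp - term_pq + term_qq
```

In an auto-encoder this quantity would be evaluated on a minibatch of latent codes and added to the reconstruction loss as a regularizer; a sample actually drawn from N(0, I) should yield a noticeably smaller value than a shifted one.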
