Multi-level Monte Carlo Variational Inference

Journal of Machine Learning Research (JMLR), 2019
Abstract

We propose a variance-reduction framework for variational inference based on the multi-level Monte Carlo (MLMC) method. The framework is naturally compatible with reparameterized gradient estimators. We also propose a novel stochastic gradient estimation method and optimization algorithm built on MLMC, which adaptively sets the sample size per level in each iteration according to the ratio of the variance to the computational cost. Furthermore, we analyze the convergence of stochastic gradient descent and the quality of the gradient estimator at each optimization step in terms of the signal-to-noise ratio. Finally, we evaluate our method against sampling-based benchmark methods in several experiments and find that it approaches the optimal value more closely and reduces gradient variance more than the other methods do.
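The core MLMC idea the abstract builds on can be illustrated with a generic toy example: write the target expectation as a telescoping sum of level-wise corrections, couple each correction on common random numbers so its variance shrinks with the level, and allocate the per-level sample size proportionally to the square root of the variance-to-cost ratio. The sketch below is not the paper's estimator; the integrand `f`, the level construction (level l averages 2**l common draws), and the budget scheme are illustrative assumptions.

```python
import numpy as np


def level_samples(l, n, rng, f=lambda x: np.sin(x) + x ** 2):
    """Coupled samples of the level-l correction P_l - P_{l-1}.

    Toy setup: P_l = f(mean of 2**l draws), so the telescoping sum
    estimates lim_l E[P_l].  The coarse term reuses half of the fine
    draws; this coupling keeps the correction variance small, which
    is what MLMC exploits.
    """
    m = 2 ** l
    z = rng.standard_normal((n, m))
    fine = f(z.mean(axis=1))
    if l == 0:
        return fine
    coarse = f(z[:, : m // 2].mean(axis=1))
    return fine - coarse


def adaptive_mlmc(L=5, pilot=200, budget=20_000, seed=0):
    """MLMC estimate with per-level sample sizes N_l chosen
    proportionally to sqrt(V_l / C_l), the variance-to-cost rule
    underlying the adaptive allocation described in the abstract."""
    rng = np.random.default_rng(seed)
    cost = [2.0 ** l for l in range(L + 1)]          # C_l: draws per sample
    # Pilot run: estimate the per-level correction variance V_l.
    var = [np.var(level_samples(l, pilot, rng)) for l in range(L + 1)]
    # Allocate N_l proportional to sqrt(V_l / C_l) within the budget.
    weights = [np.sqrt(v / c) for v, c in zip(var, cost)]
    scale = budget / sum(w * c for w, c in zip(weights, cost))
    n = [max(2, int(np.ceil(scale * w))) for w in weights]
    # Telescoping sum of the level means gives the MLMC estimate.
    est = sum(level_samples(l, n[l], rng).mean() for l in range(L + 1))
    return est, n
```

Most of the budget lands on the cheap low levels, while the expensive high levels need only a handful of samples because their coupled corrections have small variance; the same allocation logic carries over when the quantity being estimated is a stochastic gradient rather than a scalar expectation.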
