Variational inference for generalized linear mixed models using partially non-centered parametrizations
The effects of different parametrizations on the convergence of Bayesian computational algorithms for hierarchical models are well explored. In particular, techniques such as the centered parametrization (CP), the non-centered parametrization (NCP) and the partially non-centered parametrization (PNCP) can be used to accelerate convergence in MCMC and EM algorithms. These ideas have not been well studied, however, for variational Bayes (VB) methods. VB is a fast deterministic method for posterior approximation that has attracted increasing interest because of its suitability for large, high-dimensional data sets. Unlike in MCMC and EM algorithms, the choice among parametrizations such as the CP, NCP and PNCP in VB has not only computational implications (in terms of rate of convergence) but also statistical ones, since different parametrizations induce different factorized approximations to the posterior. Here we examine the use of the PNCP in VB for generalized linear mixed models (GLMMs). Our paper makes four contributions. First, we show how to implement a recently developed machine learning algorithm, non-conjugate variational message passing (NCVMP), for GLMMs, with a focus on Poisson and logistic models. Second, we show that the PNCP adapts to the quantity of information in the observed data, so that it is not necessary to choose between the CP and the NCP in advance; the data determine the optimal parametrization. Third, we show that beyond accelerating convergence, the PNCP is also a good strategy statistically for VB, producing more accurate approximations to the posterior than either the CP or the NCP. Finally, we demonstrate that the variational lower bound produced as part of the computation is often also a tight lower bound on the log marginal likelihood, which makes it useful for model selection.
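As a brief illustrative sketch of the parametrizations discussed above (the notation here is ours, not drawn from the abstract), consider a two-level normal hierarchical model with observations $y_{ij}$ in group $i$ and group effects $b_i$:

CP: $y_{ij} \mid b_i \sim N(b_i, \sigma^2)$, with $b_i \sim N(\mu, \tau^2)$;
NCP: $b_i = \mu + \tilde{b}_i$, with $\tilde{b}_i \sim N(0, \tau^2)$;
PNCP: $b_i = w\mu + \tilde{b}_i$, with $\tilde{b}_i \sim N\bigl((1 - w)\mu, \tau^2\bigr)$ and a weight $w \in [0, 1]$.

Setting $w = 0$ recovers the CP and $w = 1$ the NCP, so the PNCP interpolates between the two; roughly speaking, the more informative the data are about $b_i$, the closer the preferred weight lies to the CP end.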