Prior-preconditioned conjugate gradient method for accelerated Gibbs sampling in "large $n$ & large $p$" sparse Bayesian regression

Abstract

In a modern observational study based on healthcare databases, the number of observations and the number of predictors typically range on the order of $10^5$ ~ $10^6$ and $10^4$ ~ $10^5$, respectively. Despite the large sample size, data rarely provide sufficient information to reliably estimate such a large number of parameters. Sparse regression techniques provide potential solutions, one notable approach being Bayesian methods based on shrinkage priors. In the "large $n$ & large $p$" setting, however, posterior computation encounters a major bottleneck at repeated sampling from a high-dimensional Gaussian distribution whose precision matrix $\Phi$ is expensive to compute and factorize. In this article, we present a novel algorithm to speed up this bottleneck based on the following observation: we can cheaply generate a random vector $b$ such that the solution to the linear system $\Phi \beta = b$ has the desired Gaussian distribution. We can then solve the linear system by the conjugate gradient (CG) algorithm through matrix-vector multiplications by $\Phi$, without ever explicitly inverting $\Phi$. Rapid convergence of CG in this specific context is guaranteed by the theory of prior-preconditioning we develop. We apply our algorithm to a clinically relevant large-scale observational study with $n$ = 72,489 patients and $p$ = 22,175 clinical covariates, designed to assess the relative risk of adverse events from two alternative blood anticoagulants. Our algorithm demonstrates an order-of-magnitude speed-up in the posterior computation.
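
The sampling trick described above can be illustrated with a short sketch. The Python snippet below is a minimal illustration rather than the authors' implementation: it assumes a Gaussian likelihood, so that the conditional precision takes the form $\Phi = X^\top X / \sigma^2 + \Lambda^{-1}$ with $\Lambda$ a diagonal matrix of prior scales, and all variable names (X, y, sigma, lam) are illustrative. It draws $b$ so that the solution of $\Phi \beta = b$ has the target Gaussian distribution, then solves the system with SciPy's conjugate gradient routine, preconditioned by the (diagonal) prior precision.

# Minimal sketch (illustrative, not the authors' code): sample from the conditional
# posterior N(Phi^{-1} X^T y / sigma^2, Phi^{-1}) with Phi = X^T X / sigma^2 + Lambda^{-1},
# using prior-preconditioned CG instead of factorizing Phi.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, p, sigma = 500, 2000, 1.0                 # toy sizes; the real problems are far larger
X = rng.standard_normal((n, p))
y = X[:, :5] @ np.ones(5) + sigma * rng.standard_normal(n)
lam = rng.uniform(0.01, 1.0, size=p)         # local shrinkage scales, Lambda = diag(lam^2)
prior_prec = 1.0 / lam**2                    # diagonal of Lambda^{-1}

def phi_matvec(v):
    # Multiply by Phi = X^T X / sigma^2 + Lambda^{-1} without ever forming Phi.
    return X.T @ (X @ v) / sigma**2 + prior_prec * v

Phi = LinearOperator((p, p), matvec=phi_matvec)

# Draw b so that beta = Phi^{-1} b has the desired Gaussian distribution:
# b = X^T y / sigma^2 + X^T eta_1 / sigma + Lambda^{-1/2} eta_2, with eta_i ~ N(0, I),
# so that b - X^T y / sigma^2 has covariance exactly Phi.
b = (X.T @ y / sigma**2
     + X.T @ rng.standard_normal(n) / sigma
     + np.sqrt(prior_prec) * rng.standard_normal(p))

# Prior-preconditioning: use Lambda^{-1} as the preconditioner, i.e. SciPy's M applies
# its inverse Lambda (elementwise, since Lambda is diagonal).
M = LinearOperator((p, p), matvec=lambda v: v / prior_prec)

beta_sample, info = cg(Phi, b, M=M, maxiter=1000)
print("CG converged" if info == 0 else f"CG stopped with info={info}")

Each CG iteration costs only two matrix-vector products with X, so the per-draw cost scales with the number of nonzeros in X rather than with a dense $p \times p$ factorization.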
