Prior-preconditioned conjugate gradient method for accelerated Gibbs sampling in "large n & large p" sparse Bayesian regression

In a modern observational study based on healthcare databases, the number of observations n and of predictors p typically range in the order of 10^5 ~ 10^6 and of 10^4 ~ 10^5. Despite the large sample size, data rarely provide sufficient information to reliably estimate such a large number of parameters. Sparse regression techniques provide potential solutions, one notable approach being Bayesian methods based on shrinkage priors. In the "large n & large p" setting, however, posterior computation encounters a major bottleneck at repeated sampling from a high-dimensional Gaussian distribution, whose precision matrix Φ is expensive to compute and factorize. In this article, we present a novel algorithm to speed up this bottleneck based on the following observation: we can cheaply generate a random vector b such that the solution β to the linear system Φβ = b has the desired Gaussian distribution. We can then solve the linear system by the conjugate gradient (CG) algorithm through matrix-vector multiplications by Φ, without ever explicitly inverting Φ. Rapid convergence of CG in this specific context is achieved by the theory of prior-preconditioning we develop. We apply our algorithm to a clinically relevant large-scale observational study with n = 72,489 patients and p = 22,175 clinical covariates, designed to assess the relative risk of adverse events from two alternative blood anti-coagulants. Our algorithm demonstrates an order of magnitude speed-up in the posterior computation.
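The sampling trick and the prior-preconditioned CG solve described above can be sketched in a few lines. The following is a minimal NumPy/SciPy illustration, not the paper's implementation: it assumes a Gaussian likelihood with unit noise variance, so that Φ = XᵀX + Λ⁻¹ for a diagonal prior covariance Λ (the array prior_var below); the function name sample_gaussian_cg and the toy data are our own inventions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sample_gaussian_cg(X, y, prior_var, rng, maxiter=None):
    """Draw one sample from N(mu, Phi^{-1}), where Phi = X'X + diag(1/prior_var)
    and mu = Phi^{-1} X'y, without ever forming or factorizing Phi."""
    n, p = X.shape
    # Cheap right-hand side: with eta ~ N(0, I_n) and delta ~ N(0, I_p),
    # b = X'(y + eta) + delta / sqrt(prior_var) has mean X'y and covariance Phi,
    # so the solution beta of Phi beta = b is exactly N(mu, Phi^{-1})-distributed.
    eta = rng.standard_normal(n)
    delta = rng.standard_normal(p)
    b = X.T @ (y + eta) + delta / np.sqrt(prior_var)

    # Access Phi only through matrix-vector products; Phi is never built.
    Phi = LinearOperator((p, p), matvec=lambda v: X.T @ (X @ v) + v / prior_var)

    # Prior-preconditioning: approximate Phi^{-1} by the prior covariance,
    # a diagonal matrix, so each application costs only O(p).
    M = LinearOperator((p, p), matvec=lambda v: prior_var * v)

    beta, info = cg(Phi, b, M=M, maxiter=maxiter)
    if info != 0:
        raise RuntimeError(f"CG did not converge (info = {info})")
    return beta

# Toy usage: 1,000 observations, 200 predictors, 5 strong signals.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 200))
y = X[:, :5] @ np.ones(5) + rng.standard_normal(1000)
prior_var = np.full(200, 1e-2)  # stand-in for shrinkage-prior scales
prior_var[:5] = 10.0            # weak shrinkage on the true signals
beta = sample_gaussian_cg(X, y, prior_var, rng)
```

Under a sparse shrinkage prior most entries of prior_var are tiny, so the preconditioned operator acts nearly as the identity on the heavily shrunk coordinates and CG converges after few iterations; this is the intuition behind the prior-preconditioning theory referenced in the abstract.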