Scalable and Accurate Variational Bayes for High-Dimensional Binary
Regression Models
Modern methods for Bayesian regression beyond the Gaussian response setting are often computationally impractical or inaccurate in high dimensions. As discussed in recent literature, bypassing this trade-off remains an open problem even in basic binary regression models, and there is limited theory on the quality of variational approximations in high-dimensional settings. To address this gap, we study the approximation accuracy of routine-use mean-field variational Bayes in high-dimensional probit regression with Gaussian priors, obtaining new and practically relevant results on the pathological behavior of this strategy in uncertainty quantification, estimation and prediction; these results also suggest caution in the use of maximum a posteriori estimates when p>n. Motivated by these findings, we develop a new partially-factorized variational approximation for the posterior distribution of the probit coefficients that leverages a representation with global and local variables but, unlike classical mean-field assumptions, avoids a fully factorized approximation and instead factorizes only the local variables. We prove that the resulting approximation belongs to a tractable class of unified skew-normal densities that incorporates skewness and, unlike state-of-the-art mean-field solutions, converges to the exact posterior density as p grows to infinity. To solve the variational optimization problem, we derive a tractable coordinate ascent variational inference (CAVI) algorithm that easily scales to p in the tens of thousands and provably requires a number of iterations converging to 1 as p grows to infinity. These findings are illustrated in extensive empirical studies in which our new solution improves the accuracy of mean-field variational Bayes for any n and p, with the magnitude of the gains being remarkable in those high-dimensional p>n settings where state-of-the-art methods are computationally impractical.
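To make the mean-field baseline studied in the abstract concrete, the following is a minimal sketch (not the authors' code) of the classical CAVI for Bayesian probit regression with a N(0, nu2 * I_p) prior, using the standard latent-variable representation z_i = x_i'beta + eps_i with eps_i ~ N(0,1) and y_i = 1{z_i > 0}; the factorization q(beta) * prod_i q(z_i) yields a Gaussian q(beta) and truncated-normal q(z_i). The function name, default hyperparameter, and convergence tolerance are illustrative assumptions; the paper's partially-factorized (unified skew-normal) updates are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def mean_field_cavi_probit(X, y, nu2=25.0, max_iter=500, tol=1e-8):
    """Classical mean-field CAVI for Bayesian probit regression (sketch).

    Prior: beta ~ N(0, nu2 * I_p). Latent representation:
    z_i = x_i' beta + eps_i, eps_i ~ N(0, 1), y_i = 1{z_i > 0}.
    The mean-field factorization q(beta) * prod_i q(z_i) gives
      q(beta) = N(V X' E[z], V),  V = (X'X + I/nu2)^{-1},
      q(z_i)  = N(x_i' E[beta], 1) truncated to R+ if y_i = 1, R- otherwise.
    Returns the variational mean and covariance of beta.
    """
    n, p = X.shape
    s = 2.0 * y - 1.0                             # map {0,1} labels to {-1,+1}
    V = np.linalg.inv(X.T @ X + np.eye(p) / nu2)  # fixed across iterations
    VXt = V @ X.T
    Ez = np.zeros(n)                              # initialize E[z]
    mu_beta = VXt @ Ez
    for _ in range(max_iter):
        eta = X @ mu_beta                         # linear predictor under q(beta)
        # mean of a N(eta_i, 1) truncated to the half-line dictated by y_i
        Ez = eta + s * norm.pdf(eta) / norm.cdf(s * eta)
        mu_new = VXt @ Ez                         # update E[beta]
        if np.max(np.abs(mu_new - mu_beta)) < tol:
            mu_beta = mu_new
            break
        mu_beta = mu_new
    return mu_beta, V
```

A typical call would be `mu, V = mean_field_cavi_probit(X, y)`. The abstract's point is that in p>n regimes this routine-use approximation can behave pathologically (for instance in its uncertainty quantification), which motivates the partially-factorized alternative that keeps the global variable beta unfactorized and factorizes only the local latent variables z_i.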