
Scalable Bayesian regression in high dimensions with multiple data sources

Abstract

Many current applications of high-dimensional regression involve multiple sources or types of covariates. We propose methodology for this setting, motivated by biomedical applications in the "wide data" regime with large total dimensionality p and sample size n ≪ p. As a starting point, we formulate a flexible ridge-type prior with shrinkage levels that are specific to each data type or source. These multiple shrinkage levels are set automatically using empirical Bayes. Importantly, all estimation, including the setting of shrinkage levels, can be formulated mainly in terms of inner product matrices of size n × n, rendering computation fast and scalable in the wide data setting, even for millions of predictors, with the resulting procedures being free of user-set tuning parameters. We consider sparse extensions via constrained minimization of a certain Kullback-Leibler divergence. This includes a relaxed variant that scales to large p, allows adaptive and source-specific shrinkage and has a closed-form solution. We compare these approaches to standard high-dimensional methods in a simulation study based on biological data.
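The n × n reduction mentioned in the abstract can be illustrated with a short sketch (the function name and interface below are hypothetical, not the authors' implementation): for a ridge penalty with a group-specific level λ_g on each covariate source, the standard dual/Woodbury identity gives β̂ = D⁻¹Xᵀ(XD⁻¹Xᵀ + Iₙ)⁻¹y, where D is the diagonal matrix of per-feature penalties, so the only matrix inverted has size n × n regardless of p.

```python
import numpy as np

def group_ridge(X, y, groups, lambdas):
    """Ridge with source-specific penalties via the n x n dual form.

    X       : (n, p) design matrix
    y       : (n,) response
    groups  : length-p sequence assigning each column to a source
    lambdas : dict mapping each source label to its penalty level
    """
    # Scale each column by 1/sqrt(lambda of its source): Xs = X D^{-1/2}
    inv_sqrt = 1.0 / np.sqrt(np.array([lambdas[g] for g in groups]))
    Xs = X * inv_sqrt                     # broadcast over columns
    K = Xs @ Xs.T                         # n x n inner product matrix
    alpha = np.linalg.solve(K + np.eye(len(y)), y)
    return inv_sqrt * (Xs.T @ alpha)      # beta = D^{-1} X^T (K + I)^{-1} y
```

For n in the hundreds and p in the millions, forming K costs O(n²p) but the solve is only O(n³), which is the source of the scalability claim; the empirical Bayes tuning of the λ_g described in the paper operates on these same n × n matrices.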
