Posterior Concentration Properties of a General Class of Shrinkage Estimators around Nearly Black Vectors

We consider the problem of estimating a high-dimensional multivariate normal mean vector that is sparse in the sense of being nearly black. We study the optimality of Bayes estimates corresponding to a very general class of continuous shrinkage priors on the mean vector. This class is rich enough to include a wide variety of heavy-tailed priors in extensive use in sparse high-dimensional problems, such as the horseshoe; in particular, the three-parameter beta normal mixture priors, the generalized double Pareto priors, the inverse gamma priors, and the normal-exponential-gamma priors all belong to this class. We work in the frequentist setting where the data are generated according to a multivariate normal distribution with a fixed unknown mean vector. Under the assumption that the number of non-zero components of the mean vector is known, we show that the Bayes estimators corresponding to this general class of priors attain the minimax risk (possibly up to a multiplicative constant) under squared error loss. Further, we establish an upper bound on the rate of contraction of the posterior distribution around the estimators under study. We also provide a lower bound on the posterior variance for an important subclass of this general class of shrinkage priors, a subclass that includes, for appropriate values of their shape parameters, the generalized double Pareto priors, the three-parameter beta normal mixtures (the horseshoe in particular), the inverse gamma prior, and many other shrinkage priors. This work is inspired by the recent work of van der Pas et al. (2014) on the posterior contraction properties of the horseshoe prior in the present set-up.
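As a toy illustration of the kind of Bayes estimator studied here (not code from the paper), the sketch below computes the posterior mean of a single normal mean under the horseshoe prior, assuming unit error variance and global scale tau = 1. It uses the standard shrinkage-weight representation E[theta | y] = (1 - E[kappa | y]) y with kappa = 1/(1 + lambda^2), where lambda has a standard half-Cauchy prior; the grid size and the substitution u = sqrt(1 - kappa) are implementation choices made here for numerical convenience.

```python
import numpy as np

def horseshoe_posterior_mean(y, n_grid=10_000):
    """Posterior mean E[theta | y] for one observation y ~ N(theta, 1)
    under the horseshoe prior with global scale tau = 1.

    Uses E[theta | y] = (1 - E[kappa | y]) * y, where the shrinkage
    weight kappa = 1/(1 + lambda^2) has prior Beta(1/2, 1/2) when
    lambda is standard half-Cauchy.
    """
    # Posterior density of kappa is proportional to
    # (1 - kappa)^(-1/2) * exp(-kappa * y^2 / 2) on (0, 1).
    # Substituting u = sqrt(1 - kappa) removes the endpoint
    # singularity: the posterior of u on (0, 1) is proportional
    # to exp(u^2 * y^2 / 2).
    u = np.linspace(0.0, 1.0, n_grid)
    log_w = 0.5 * u**2 * y**2
    w = np.exp(log_w - log_w.max())  # stabilize before exponentiating
    kappa = 1.0 - u**2
    e_kappa = np.sum(kappa * w) / np.sum(w)  # uniform-grid quadrature
    return (1.0 - e_kappa) * y
```

On this sketch, small observations are shrunk heavily toward zero while large observations are left nearly untouched, which is the tail-robustness property that makes heavy-tailed priors in this class attractive for nearly black vectors.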