Asymptotic Properties of Bayes Risk of a General Class of Normal Scale Mixture Priors Under Sparsity

In this article, we investigate the optimality properties of multiple testing rules induced by a general class of normal scale-mixture priors commonly used in sparse Bayesian estimation and prediction problems. We work in a Bayesian decision-theoretic framework in which the data are assumed to be generated from a two-component mixture of normal distributions and the total loss is taken to be the sum of the losses incurred in the individual tests. The class of normal scale-mixture priors we consider is of the form given in Polson and Scott [2011] and is large enough to include several well-known families of shrinkage priors, such as the Hypergeometric Inverted-Beta priors, the Generalized Double Pareto priors, the Three Parameter Beta priors, the Inverse-Gamma priors, and many more. We establish that, under the asymptotic framework of Bogdan, Chakrabarti, Frommlet and Ghosh [2011], the multiple testing rules induced by this general class of normal scale-mixture priors asymptotically attain the optimal Bayes risk up to a multiplicative factor of O(1), a result similar to that obtained for the Horseshoe prior in Datta and Ghosh [2013]. Our theoretical results also resolve an open problem of Datta and Ghosh [2013] by proving the conjectured optimality property of the Double Pareto priors.
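For orientation, the following is a minimal sketch of the setup described above, written in our own notation (the symbols p, psi, tau, lambda_i, and the mixing density pi are placeholders not fixed by the abstract). Under the two-groups formulation of Bogdan, Chakrabarti, Frommlet and Ghosh [2011], each observation is marginally a two-component normal mixture, the total loss is additive over tests, and a normal scale-mixture prior places a normal distribution on each mean with a random scale drawn from a mixing density:

\[
X_i \mid \mu_i \sim N(\mu_i, \sigma^2), \qquad
\mu_i \sim (1 - p)\,\delta_{\{0\}} + p\, N(0, \psi^2), \qquad i = 1, \dots, n,
\]
so that marginally \( X_i \sim (1-p)\, N(0, \sigma^2) + p\, N(0, \sigma^2 + \psi^2) \), and the total loss is
\( \sum_{i=1}^{n} L_i \), where \( L_i \) is the 0-1 loss of the \(i\)-th test. A normal scale-mixture prior for \( \mu_i \) takes the form
\[
\mu_i \mid \lambda_i^2, \tau^2 \sim N(0, \lambda_i^2 \tau^2), \qquad \lambda_i^2 \sim \pi(\lambda_i^2),
\]
with different choices of the mixing density \( \pi \) recovering the Horseshoe, Three Parameter Beta, Generalized Double Pareto, and related shrinkage priors.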