arXiv:1510.01307v4

Asymptotic Minimaxity, Optimal Posterior Concentration and Asymptotic Bayes Optimality of Horseshoe-type Priors Under Sparsity

5 October 2015
P. Ghosh
A. Chakrabarti
Abstract

In this article, we investigate certain asymptotic optimality properties of a very broad class of one-group continuous shrinkage priors for simultaneous estimation and testing of a sparse normal mean vector. We study the asymptotic optimality of the Bayes estimates and the posterior concentration properties corresponding to this general class of one-group priors when the data are assumed to be generated from a multivariate normal distribution with a fixed unknown mean vector. Under the assumption that the number of non-zero means is known, we show that the Bayes estimators arising from this general class of shrinkage priors attain the minimax risk, up to a multiplicative constant, under the $l_2$ norm. In particular, we show that for horseshoe-type priors, such as the three-parameter beta normal mixtures with parameters $a = 0.5$, $b > 0$ and the generalized double Pareto prior with shape parameter $\alpha = 1$, the corresponding Bayes estimates are asymptotically minimax. Moreover, the posterior distributions arising from this general class of one-group priors are shown to contract around the true mean vector at the minimax $l_2$ rate for a wide range of values of the global shrinkage parameter, depending on the proportion of non-zero components of the underlying mean vector. An important and remarkable consequence of one key result used in proving the aforesaid minimaxity is that, within the asymptotic framework of Bogdan et al. (2011), the natural thresholding rules of Carvalho et al. (2010) based on horseshoe-type priors asymptotically attain the optimal Bayes risk with respect to a $0$-$1$ loss, up to the correct multiplicative constant, and are thus asymptotically Bayes optimal under sparsity (ABOS).
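For readers less familiar with this setting, the following is a minimal sketch of the standard sparse normal means model and the one-group prior hierarchy the abstract refers to; the notation ($X_i$, $\theta_i$, $\lambda_i$, $\tau$, $q_n$) is assumed here for illustration and may differ from the paper's own. Observations follow
\[
X_i = \theta_i + \epsilon_i, \qquad \epsilon_i \stackrel{iid}{\sim} N(0,1), \qquad i = 1, \dots, n,
\]
where the mean vector $\theta = (\theta_1, \dots, \theta_n)$ has only $q_n = o(n)$ non-zero coordinates. A one-group continuous shrinkage prior models every coordinate through a single global-local scale mixture,
\[
\theta_i \mid \lambda_i, \tau \sim N(0, \lambda_i^2 \tau^2), \qquad \lambda_i \stackrel{iid}{\sim} \pi(\lambda),
\]
with local scales $\lambda_i$ and a global shrinkage parameter $\tau$; the horseshoe of Carvalho et al. (2010) takes $\lambda_i \sim C^{+}(0,1)$, a standard half-Cauchy. Minimaxity under the $l_2$ norm, in the sense claimed in the abstract, means the Bayes estimator $\hat{\theta}$ attains
\[
\sup_{\theta \in l_0[q_n]} E_\theta \| \hat{\theta} - \theta \|_2^2 \asymp 2\, q_n \log(n/q_n),
\]
the minimax risk over nearly black vectors, up to the multiplicative constant discussed in the paper.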
