
The Bayes oracle and asymptotic optimality of multiple testing procedures under sparsity

Abstract

We investigate the asymptotic optimality of a large class of multiple testing rules using the framework of Bayesian decision theory. We consider a parametric setup, in which observations come from a normal scale mixture model, and assume that the total loss is the sum of losses for individual tests. Our model can be used for testing point null hypotheses of no signal (zero effect), as well as for distinguishing large signals from a multitude of very small effects. The optimality of a rule is proved by showing that, within our chosen asymptotic framework, the ratio of its Bayes risk to that of the Bayes oracle (the rule which minimizes the Bayes risk) converges to one. Our main interest is in the asymptotic scheme under which the proportion p of "true" alternatives converges to zero. We fully characterize the class of fixed-threshold multiple testing rules which are asymptotically optimal, and hence derive conditions for the asymptotic optimality of rules controlling the Bayesian False Discovery Rate (BFDR). We also provide conditions under which the popular Benjamini-Hochberg and Bonferroni procedures are asymptotically optimal, and show that for a wide class of sparsity levels the threshold of the former can be approximated very well by a non-random threshold. As far as we know, this is the first proof of the decision-theoretic asymptotic optimality of the Benjamini-Hochberg rule in the context of hypothesis testing.
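The sketch below is a minimal illustration (not code from the paper) of the setting the abstract describes: data from a sparse normal scale mixture, a fixed-threshold rule whose cutoff grows roughly like 2 log(1/p) under sparsity, and the Benjamini-Hochberg procedure applied to two-sided p-values. All concrete parameter values (n, p, tau, alpha) and the approximate threshold constant are hypothetical choices for illustration, not the exact oracle quantities derived in the paper.

```python
# Illustrative sketch, assuming a two-group normal scale mixture:
# null effects ~ N(0, 1), signals ~ N(0, 1 + tau^2), signal proportion p.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n, p, tau = 10_000, 0.01, 5.0            # number of tests, sparsity, signal scale
is_signal = rng.random(n) < p             # indicators of "true" alternatives
X = rng.normal(size=n) + is_signal * rng.normal(scale=tau, size=n)

# Fixed-threshold rule: reject when X^2 exceeds c^2. Under sparsity an
# (approximately) optimal threshold grows like c^2 ~ 2 * log(1/p); this
# constant is a rough stand-in for the oracle threshold, not its exact form.
c2 = 2 * np.log(1 / p)
reject_fixed = X**2 > c2

# Benjamini-Hochberg at level alpha on two-sided p-values:
# reject the k smallest p-values, where k is the largest index with
# p_(k) <= alpha * k / n.
alpha = 0.05
pvals = 2 * norm.sf(np.abs(X))
order = np.argsort(pvals)
passed = pvals[order] <= alpha * np.arange(1, n + 1) / n
k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
reject_bh = np.zeros(n, dtype=bool)
reject_bh[order[:k]] = True

for name, rej in [("fixed threshold", reject_fixed), ("BH", reject_bh)]:
    false_disc = np.sum(rej & ~is_signal)   # type I errors
    missed = np.sum(~rej & is_signal)       # type II errors
    print(f"{name}: false discoveries = {false_disc}, missed signals = {missed}")
```

Summing the two error counts (with the paper's per-test loss weights) gives an empirical counterpart of the additive total loss whose Bayes risk the oracle minimizes.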
