A Unified Treatment of Multiple Testing with Prior Knowledge

A significant literature has arisen studying ways to employ prior knowledge to improve the power and precision of multiple testing procedures. Common forms of prior knowledge include (a) a priori beliefs about which hypotheses are null, modeled by non-uniform prior weights; (b) differing importances of hypotheses, modeled by differing penalties for false discoveries; (c) partitions of the hypotheses into known groups, indicating (dis)similarity of hypotheses; and (d) knowledge of independence, positive dependence or arbitrary dependence between hypotheses or groups, allowing for more aggressive or conservative procedures. We present a general framework for global null testing and false discovery rate (FDR) control that allows the scientist to incorporate all four types of prior knowledge (a)-(d) simultaneously. We unify a number of existing procedures, generalize the conditions under which they are known to work, and simplify their proofs of FDR control under independence, positive and arbitrary dependence. We also present an algorithmic framework that strictly generalizes and unifies the classic algorithms of Benjamini and Hochberg [3] and Simes [25], algorithms that guard against unknown dependence [7, 9], algorithms that employ prior weights [17, 15], algorithms that use penalty weights [4], algorithms that incorporate null-proportion adaptivity [26, 27], and algorithms that make use of multiple arbitrary partitions into groups [1]. Unlike this previous work, our framework can simultaneously incorporate all four types of prior knowledge and all three forms of dependence.
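For concreteness, the following is a minimal sketch of the Benjamini-Hochberg step-up procedure [3] mentioned above, with an optional use of prior weights in the spirit of (a) by dividing each p-value by its weight before the step-up. This is an illustrative assumption-laden sketch (function name, interface, and the convention that weights are nonnegative and average to one are ours), not the paper's general algorithmic framework.

    import numpy as np

    def weighted_bh(pvals, alpha=0.05, weights=None):
        """Step-up BH procedure, optionally with prior weights.

        With weights=None this is the classic Benjamini-Hochberg procedure.
        With nonnegative weights averaging to 1, each p-value p_i is replaced
        by p_i / w_i before the step-up (standard p-value weighting; an
        illustrative sketch, not the paper's unified algorithm).
        Returns a boolean rejection vector.
        """
        p = np.asarray(pvals, dtype=float)
        n = p.size
        if weights is not None:
            w = np.asarray(weights, dtype=float)
            p = np.where(w > 0, p / w, np.inf)  # zero weight => never rejected
        order = np.argsort(p)
        sorted_p = p[order]
        thresholds = alpha * np.arange(1, n + 1) / n
        below = sorted_p <= thresholds
        reject = np.zeros(n, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])  # largest index passing the step-up
            reject[order[: k + 1]] = True     # reject the k+1 smallest (weighted) p-values
        return reject

The other ingredients discussed in the abstract (penalty weights, group structure, and corrections for positive or arbitrary dependence) modify the thresholds and the ordering in ways developed in the paper itself; they are not reflected in this sketch.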