The illusion of power: How the statistical significance filter leads to
overconfident expectations of replicability
We show that publishing results using the statistical significance filter---publishing only when the p-value is less than 0.05---leads to a vicious cycle of overoptimistic expectations about the replicability of results. First, we show through a simple derivation that when true statistical power is relatively low, estimating power from statistically significant results leads to overestimates of the true power. Then, we present a case study using 10 experimental comparisons drawn from a recently published meta-analysis in psycholinguistics (Jäger et al., 2017). We show that the statistically significant results yield an illusion of replicability, i.e., an illusion that power is high. This illusion holds even if the researcher does not conduct any formal power analysis but simply uses statistical significance to informally assess the robustness of results.
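The inflation mechanism can be illustrated with a small simulation (a sketch with hypothetical numbers, not taken from the paper): studies of a fixed small effect have low true power, the significance filter keeps only studies with p < 0.05, and power is then computed as if each significant (hence exaggerated) effect estimate were the true effect. The effect size, standard deviation, and sample size below are illustrative assumptions.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_from_effect(delta, se, crit=1.96):
    """Two-sided power of a z-test, treating delta as the true effect."""
    nc = delta / se
    return phi(nc - crit) + phi(-nc - crit)

random.seed(1)
delta_true, sd, n = 0.3, 1.0, 20          # hypothetical small effect, n = 20
se = sd / math.sqrt(n)
true_power = power_from_effect(delta_true, se)

# Simulate many studies; keep only those passing the significance filter,
# and recompute power from each study's observed effect estimate.
est_powers = []
for _ in range(50_000):
    xbar = random.gauss(delta_true, se)   # observed effect in one study
    if abs(xbar / se) > 1.96:             # publish only if p < 0.05
        est_powers.append(power_from_effect(abs(xbar), se))

print(f"true power: {true_power:.2f}")
print(f"mean power computed from significant results: "
      f"{sum(est_powers) / len(est_powers):.2f}")
```

Because a result can only be significant if its observed z-statistic exceeds the critical value, power computed from a significant estimate is always above 0.5, regardless of how low the true power is.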