Valid and efficient imprecise-probabilistic inference with partial priors, I. First results

Between Bayesian and frequentist inference, it's commonly believed that the former is for cases where one has a prior and the latter is for cases where one has no prior. But the prior/no-prior classification isn't exhaustive, and most real-world applications fit somewhere in between these two extremes. That neither of the two dominant schools of thought is suited for these applications creates confusion and slows progress. A key observation here is that "no prior information" actually means that no prior distribution can be ruled out, so the classically frequentist context is best characterized as "every prior" rather than "no prior." From this perspective, it's clear that there's an entire spectrum of contexts depending on what, if any, partial prior information is available, with Bayesian (one prior) and frequentist (every prior) at opposite extremes. This paper ties the two frameworks together by formally treating those cases where only partial prior information is available, using the theory of imprecise probability. The end result is a unified framework of (imprecise-probabilistic) statistical inference with a new validity condition that implies both frequentist-style error rate control for derived procedures and Bayesian-style coherence properties, relative to the given partial prior information. This new theory contains both the Bayesian and frequentist frameworks as special cases, since both are valid in this new sense relative to their respective partial priors. Different constructions of these valid inferential models are considered and compared on the basis of their efficiency.
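To make the spectrum of partial prior information concrete, here is a minimal, hypothetical sketch (not the paper's own inferential-model construction): a partial prior is encoded as a set of Beta priors for a binomial success probability, and the envelope of the resulting posterior probabilities gives lower and upper bounds. A singleton set recovers the Bayesian answer, while enlarging the set toward all priors pushes the envelope toward the vacuous, frequentist-style extreme. All numbers and the particular prior set are illustrative only.

```python
# Hedged sketch: posterior probability of the hypothesis "theta <= 0.5" for
# binomial data, computed under (i) a single Beta prior (Bayesian extreme) and
# (ii) a finite grid approximating a *set* of Beta priors (partial prior).
from scipy.stats import beta

def posterior_prob(a, b, x, n, cutoff=0.5):
    """Beta(a, b) prior + Binomial(n, theta) likelihood with x successes
    gives a Beta(a + x, b + n - x) posterior; return P(theta <= cutoff)."""
    return beta.cdf(cutoff, a + x, b + n - x)

x, n = 7, 10  # illustrative data: 7 successes in 10 trials

# (i) Bayesian extreme: exactly one prior, so one posterior probability.
print("single prior  :", posterior_prob(2.0, 2.0, x, n))

# (ii) Partial prior: e.g. "prior mean of theta lies in [0.3, 0.6] and the
#      prior sample size lies in [1, 10]" -- a whole set of Beta priors,
#      approximated here by a coarse grid over (mean, prior sample size).
priors = [(m * s, (1 - m) * s)
          for m in (0.3, 0.4, 0.5, 0.6)
          for s in (1.0, 2.0, 5.0, 10.0)]
probs = [posterior_prob(a, b, x, n) for a, b in priors]
print("lower / upper :", min(probs), max(probs))  # imprecise posterior envelope
```

The gap between the lower and upper values reflects how much the partial prior leaves unresolved; widening the prior set widens the envelope, and the limiting "every prior" case corresponds to the frequentist end of the spectrum discussed above.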