
Bayesian Brittleness: Why no Bayesian model is "good enough"

Abstract

Although it is known that Bayesian estimators may be inconsistent if the model is misspecified, it is also a popular belief that a "good" or "close" enough model should have good convergence properties. This paper shows that, contrary to this belief, there is no such thing as a "close enough" model in Bayesian inference, in the following sense: we derive optimal lower and upper bounds on posterior values obtained from models that exactly capture an arbitrarily large number of finite-dimensional marginals of the data-generating distribution and/or that are arbitrarily close to the data-generating distribution in the Prokhorov or total variation metrics; these bounds show that such models may still make the largest possible prediction error after conditioning on an arbitrarily large number of samples. Therefore, under model misspecification, and without assumptions stronger than (arbitrary) closeness in the Prokhorov or total variation metrics, Bayesian inference offers no better guarantee of accuracy than arbitrarily picking a value between the essential infimum and supremum of the quantity of interest. In particular, an unscrupulous practitioner could slightly perturb a given prior and model to achieve any desired posterior conclusion. Finally, this paper also addresses the non-trivial technical question of how to incorporate priors in the Optimal Uncertainty Quantification (OUQ) framework. In particular, we develop the necessary measure-theoretic foundations in the context of Polish spaces, so that prior measures can be placed on subsets of a product space of functions and measures while the important quantities of interest remain measurable. We also develop the reduction theory for optimization problems over measures on product spaces of measures and functions, thus laying the foundations for the scientific computation of optimal statistical estimators.
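
Schematically, and in notation introduced here rather than taken from the paper: let $\mu^\dagger$ denote the data-generating distribution, $\Phi$ the quantity of interest, and $\mathcal{A}_\delta$ the class of prior-model pairs whose predicted distributions lie within $\delta > 0$ of $\mu^\dagger$ in the Prokhorov or total variation metric. The brittleness claim of the abstract then reads, roughly,

\[
\inf_{\pi \in \mathcal{A}_\delta} \mathbb{E}_\pi\!\left[\Phi \mid d_1, \dots, d_n\right] \longrightarrow \operatorname*{ess\,inf}_{\mu^\dagger} \Phi,
\qquad
\sup_{\pi \in \mathcal{A}_\delta} \mathbb{E}_\pi\!\left[\Phi \mid d_1, \dots, d_n\right] \longrightarrow \operatorname*{ess\,sup}_{\mu^\dagger} \Phi
\quad \text{as } n \to \infty ,
\]

i.e., for any fixed $\delta$ the optimal posterior bounds over the class collapse to the trivial ones, no matter how many samples $d_1, \dots, d_n$ are conditioned on.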
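
The perturbation mechanism behind the "unscrupulous practitioner" remark can be illustrated numerically. The sketch below is not the paper's construction; it is a minimal toy example, and the specific model, two-point prior, and parameter values (theta_bad = 5, eps = 0.01, sigma = 1e-4) are assumptions made here for illustration (it requires numpy and scipy). It perturbs a normal sampling model at a single parameter value by an eps-mixture of narrow bumps placed at the observed data points; the perturbed model stays within eps of the clean one in total variation for every parameter, yet the posterior swings from near-certainty in the true mean to near-certainty in an arbitrarily wrong one.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Data actually generated from N(0, 1): the "true" mean is theta = 0.
n = 100
data = rng.normal(loc=0.0, scale=1.0, size=n)

# Two-point prior over candidate means; the practitioner wants the
# posterior to land on the (badly wrong) value theta_bad.
theta_good, theta_bad = 0.0, 5.0
log_prior = np.log(np.array([0.5, 0.5]))

def log_lik_clean(theta):
    # Unperturbed sampling model N(theta, 1).
    return norm.logpdf(data, loc=theta, scale=1.0).sum()

eps = 0.01    # total-variation size of the perturbation
sigma = 1e-4  # width of the bumps placed at the observed points

def log_lik_perturbed(theta):
    # Perturb the model only at theta_bad: mix in, with weight eps, an
    # equal-weight mixture of narrow Gaussians centred at the observed
    # data points.  For every theta the perturbed sampling distribution
    # is within eps of the clean one in total variation, but the
    # likelihood of the observed sample at theta_bad explodes.
    base = norm.pdf(data, loc=theta, scale=1.0)
    if theta == theta_bad:
        bumps = norm.pdf(data[:, None], loc=data[None, :], scale=sigma).mean(axis=1)
        return np.log((1.0 - eps) * base + eps * bumps).sum()
    return np.log(base).sum()

for name, loglik in [("clean", log_lik_clean), ("perturbed", log_lik_perturbed)]:
    ll = np.array([loglik(theta_good), loglik(theta_bad)]) + log_prior
    post = np.exp(ll - np.logaddexp.reduce(ll))
    print(f"{name:9s} model: P(theta=0 | data) = {post[0]:.3g}, "
          f"P(theta=5 | data) = {post[1]:.3g}")

With the clean model the posterior is essentially certain that theta = 0; with the eps-perturbation it is essentially certain that theta = 5. Shrinking sigma at fixed eps makes the swing as extreme as desired, which is the finite-sample face of the collapse of the optimal bounds displayed above.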
