Conditional inferential models: combining information for prior-free probabilistic inference

Abstract

The inferential model (IM) framework provides valid, prior-free probabilistic inference by focusing on the prediction of unobserved auxiliary variables. But efficient IM-based inference can be challenging when the auxiliary variable is of higher dimension than the parameter. Here we show that features of the auxiliary variable are often fully observed and that, in such cases, simultaneous dimension reduction and information aggregation can be achieved by conditioning. The proposed conditioning strategy leads to efficient IM inference and casts new light on Fisher's notions of sufficiency and conditioning, as well as on Bayesian inference. A differential-equation-driven selection of a conditional association is developed, and we prove a conditional IM validity theorem under suitable conditions. For problems that do not admit a valid conditional IM of the standard form, we propose a more flexible class of conditional IMs based on localization. Illustrations involving a bivariate normal model and a variance-components model are also given.
