Marginal inferential models: prior-free probabilistic inference on
interest parameters
Inferential models (IMs) provide a general framework for prior-free, frequency-calibrated, posterior probabilistic inference. The fundamental idea is to use auxiliary variables to quantify uncertainty about the parameter of interest. When nuisance parameters are present, a marginalization step can reduce the dimension of the auxiliary variable, which in turn leads to more efficient inference. For regular problems, exact and efficient marginalization can be achieved, and we prove that the resulting marginal IM is valid. We show that our approach provides efficient marginal inference in several challenging problems, including a many-normal-means problem, and does not suffer from common marginalization paradoxes. For non-regular problems, we propose a generalized marginalization technique that is valid and also paradox-free. Details are given for two benchmark examples, namely, the Behrens--Fisher and gamma mean problems.
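To make the auxiliary-variable idea concrete, here is an illustrative sketch (not code from the paper) of the basic IM construction for a single observation X ~ N(theta, 1). The association X = theta + Z with auxiliary variable Z ~ N(0, 1), combined with the default symmetric predictive random set for Z, yields the plausibility function pl(theta0) = P(|Z| >= |x - theta0|); the names below are our own.

```python
from math import erf, sqrt

def std_normal_cdf(z):
    # CDF of the standard normal, via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def plausibility(theta0, x):
    # Plausibility that theta = theta0 given one observation x ~ N(theta, 1).
    # Association: x = theta + z, z ~ N(0, 1); with the symmetric predictive
    # random set S = {z : |z| <= |Z'|}, Z' ~ N(0, 1), the plausibility is
    #   pl(theta0) = P(|Z'| >= |x - theta0|) = 2 * (1 - Phi(|x - theta0|)).
    return 2.0 * (1.0 - std_normal_cdf(abs(x - theta0)))

# The 95% plausibility region {theta0 : pl(theta0) > 0.05} coincides here
# with the familiar interval x +/- 1.96.
x = 1.3
print(plausibility(x, x))          # 1.0 at theta0 = x
print(plausibility(x - 1.96, x))   # roughly 0.05 at the interval endpoint
```

Marginalization enters when theta is multidimensional and only a component is of interest: the paper's point is that the auxiliary variable itself can be reduced in dimension, rather than working with the full joint plausibility.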