Generalized Bayesian Likelihood-Free Inference Using Scoring Rules Estimators
We propose a framework for Bayesian Likelihood-Free Inference (LFI) based on Generalized Bayesian Inference. To define the generalized posterior, we use Scoring Rules (SRs), which evaluate probabilistic models given an observation. In LFI, we can sample from the model but not evaluate the likelihood; for this reason, we employ SRs with easy empirical estimators. Our framework includes novel approaches as well as popular LFI techniques (such as Bayesian Synthetic Likelihood) and enjoys posterior consistency in the well-specified setting when a strictly proper SR is used (i.e., one whose expectation is uniquely minimized when the model corresponds to the data generating process). In general, our framework does not approximate the standard posterior; as such, it is possible to achieve outlier robustness, which we prove holds for the Kernel and Energy Scores. We also discuss a strategy for tuning the learning rate in the generalized posterior that is suitable for the LFI setup. We run simulation studies with correlated pseudo-marginal Markov Chain Monte Carlo and compare with related approaches on standard benchmarks and on challenging intractable-likelihood models from meteorology and ecology.
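As the abstract notes, the approach relies on SRs with easy empirical estimators computed from model simulations. A minimal sketch of the standard unbiased empirical estimator of the Energy Score (which is strictly proper for β ∈ (0, 2)) might look like the following; the function name and interface are illustrative, not taken from the paper's code:

```python
import numpy as np

def energy_score_estimate(simulations, observation, beta=1.0):
    """Unbiased empirical estimator of the Energy Score
    ES(P, y) = E||X - y||^beta - 0.5 * E||X - X'||^beta,
    computed from m model simulations x_1, ..., x_m (rows of
    `simulations`). Strictly proper for 0 < beta < 2.
    """
    x = np.atleast_2d(np.asarray(simulations, dtype=float))  # shape (m, d)
    y = np.asarray(observation, dtype=float).reshape(1, -1)
    m = x.shape[0]
    # First term: mean distance from each simulation to the observation.
    term1 = np.mean(np.linalg.norm(x - y, axis=1) ** beta)
    # Second term: mean pairwise distance between distinct simulations
    # (exclude the zero diagonal to keep the estimator unbiased).
    diffs = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) ** beta
    term2 = (diffs.sum() - np.trace(diffs)) / (m * (m - 1))
    return term1 - 0.5 * term2
```

In a generalized Bayesian setup, such an estimate S(θ, y) would replace the negative log-likelihood, giving a generalized posterior proportional to π(θ) exp(−w S(θ, y)) for a learning rate w; the exact construction is as specified in the paper.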