
Efficient MCMC Sampling with Expensive-to-Compute and Irregular Likelihoods

Abstract

Bayesian inference with Markov Chain Monte Carlo (MCMC) is challenging when the likelihood function is irregular and expensive to compute. We explore several sampling algorithms that make use of subset evaluations to reduce computational overhead. We adapt these subset samplers to the setting where gradient information is unavailable or unreliable. To achieve this, we introduce data-driven proxies in place of Taylor expansions and define a novel computation-cost-aware adaptive controller. We undertake an extensive evaluation of a challenging disease-modelling task and a configurable task with similar irregularity in the likelihood surface. We find that our improved version of Hierarchical Importance with Nested Training Samples (HINTS), with adaptive proposals and a data-driven proxy, obtains the best sampling error within a fixed computational budget. We conclude that subset evaluations can provide cheap and naturally tempered exploration, while a data-driven proxy can successfully pre-screen proposals in explored regions of the state space. These two elements combine through hierarchical delayed acceptance to achieve efficient, exact sampling.
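The core mechanism the abstract describes, using a cheap proxy to pre-screen proposals while keeping the chain exact, can be illustrated with a minimal two-stage delayed-acceptance Metropolis step. This is a generic sketch, not the paper's HINTS algorithm: `log_target` stands in for an expensive, irregular log-likelihood and `log_proxy` for a data-driven surrogate, both chosen here purely for illustration. The stage-2 acceptance ratio divides out the stage-1 proxy ratio, which is what makes the sampler exact despite the screening.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Stand-in for an expensive, irregular log-likelihood (hypothetical).
    return -0.5 * x * x + 0.1 * math.cos(5.0 * x)

def log_proxy(x):
    # Cheap surrogate of the target; here a simple Gaussian approximation.
    return -0.5 * x * x

def delayed_acceptance_step(x, step=1.0):
    """One delayed-acceptance Metropolis step with a symmetric proposal.

    Stage 1 screens the proposal with the cheap proxy; the expensive
    target is evaluated only if stage 1 accepts. Stage 2 corrects for
    the proxy, so the stationary distribution is the exact target.
    Returns (new state, whether the expensive target was evaluated).
    """
    xp = x + random.gauss(0.0, step)
    # Stage 1: screen with the cheap proxy.
    log_a1 = log_proxy(xp) - log_proxy(x)
    if random.random() >= math.exp(min(0.0, log_a1)):
        return x, False  # rejected cheaply; target never evaluated
    # Stage 2: evaluate the expensive target and divide out the proxy ratio.
    log_a2 = (log_target(xp) - log_target(x)) - log_a1
    if random.random() < math.exp(min(0.0, log_a2)):
        return xp, True
    return x, True  # target evaluated, but proposal rejected

x, n_expensive, samples = 0.0, 0, []
for _ in range(20000):
    x, used_target = delayed_acceptance_step(x)
    n_expensive += used_target
    samples.append(x)

mean = sum(samples) / len(samples)
```

Because the stage-1 screen rejects many proposals without touching `log_target`, `n_expensive` ends up well below the total number of iterations, which is the computational saving the paper exploits at much larger scale.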

@article{rosato2025_2505.10448,
  title={Efficient MCMC Sampling with Expensive-to-Compute and Irregular Likelihoods},
  author={Conor Rosato and Harvinder Lehal and Simon Maskell and Lee Devlin and Malcolm Strens},
  journal={arXiv preprint arXiv:2505.10448},
  year={2025}
}