Adaptive Multiple Importance Sampling for Gaussian Processes
In applications of Gaussian processes where quantification of uncertainty is a strict requirement, it is necessary to accurately characterize the posterior distribution over Gaussian process covariance parameters. Typically, this is done by means of Markov chain Monte Carlo (MCMC) algorithms. Focusing on Gaussian process regression, where the marginal likelihood is computable but expensive to evaluate, this paper studies algorithms based on importance sampling to compute expectations under the posterior distribution over covariance parameters. The results indicate that expectations computed using Adaptive Multiple Importance Sampling (AMIS) converge faster per unit of computation than those computed with MCMC algorithms for models with few covariance parameters, and converge as fast as MCMC for models with up to around twenty covariance parameters.
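To make the setting concrete, the sketch below shows a plain self-normalized importance-sampling estimator of a posterior expectation over GP covariance (kernel) parameters, using the GP regression marginal likelihood as the (unnormalized) target; this is not the paper's implementation. The RBF kernel, the standard-normal prior on log-parameters, the fixed Gaussian proposal, and all function names are illustrative assumptions; AMIS would instead adapt the proposal over several iterations and reweight all past samples under a mixture of the proposals used so far.

```python
# Minimal sketch (assumptions noted above): self-normalized importance sampling
# for posterior expectations over GP covariance parameters.
import numpy as np

def log_marginal_likelihood(theta, X, y, jitter=1e-6):
    """Log marginal likelihood of GP regression with an RBF kernel.

    theta = (log lengthscale, log signal variance, log noise variance).
    """
    ell, sf2, sn2 = np.exp(theta)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = sf2 * np.exp(-0.5 * d2 / ell**2) + (sn2 + jitter) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(y) * np.log(2 * np.pi))

def log_prior(theta):
    # Standard-normal prior on the log-parameters (an assumption of this sketch).
    return -0.5 * np.sum(theta**2)

def importance_sampling_expectation(f, X, y, n_samples=2000, seed=0):
    """Estimate E[f(theta) | data] by self-normalized importance sampling.

    Uses a fixed wide Gaussian proposal over log-parameters; additive
    constants in the log densities cancel after self-normalization.
    """
    rng = np.random.default_rng(seed)
    proposal_std = 2.0
    thetas = rng.normal(scale=proposal_std, size=(n_samples, 3))
    log_q = -0.5 * np.sum((thetas / proposal_std) ** 2, axis=1)
    log_target = np.array([log_marginal_likelihood(t, X, y) + log_prior(t)
                           for t in thetas])
    log_w = log_target - log_q
    w = np.exp(log_w - log_w.max())   # stabilize before normalizing
    w /= w.sum()
    return np.sum(w * np.apply_along_axis(f, 1, thetas))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(30, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
    # Posterior mean of the log lengthscale under the hyperparameter posterior.
    print(importance_sampling_expectation(lambda t: t[0], X, y))
```

Each importance weight requires one marginal-likelihood evaluation (a Cholesky factorization), which is the expensive step the paper's per-unit-of-computation comparison refers to.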