Bayesian inference using synthetic likelihood: asymptotics and adjustments

Implementing Bayesian inference is often computationally challenging in applications involving complex models, and sometimes calculating the likelihood itself is difficult. Synthetic likelihood is one approach to carrying out inference when the likelihood is intractable but simulating from the model is straightforward. The method constructs an approximate likelihood by modeling a vector summary statistic as multivariate normal, with the unknown mean and covariance matrix estimated by simulation at any given parameter value. Our article makes three contributions. The first shows that if the summary statistic satisfies a central limit theorem, then the synthetic likelihood posterior is asymptotically normal and yields credible sets with the correct level of frequentist coverage; this result is similar to that obtained with approximate Bayesian computation. The second contribution compares the computational efficiency of Bayesian synthetic likelihood and approximate Bayesian computation through the acceptance probabilities of rejection and importance sampling algorithms with a "good" proposal distribution. We show that Bayesian synthetic likelihood is computationally more efficient than approximate Bayesian computation and behaves similarly to regression-adjusted approximate Bayesian computation. Based on the asymptotic results, the third contribution proposes using adjusted inference methods when a possibly misspecified form, such as a diagonal matrix or a factor model, is assumed for the covariance matrix of the synthetic likelihood to speed up computation. The methodology is illustrated with simulated and real examples.
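To make the construction above concrete, here is a minimal Python sketch of a synthetic-likelihood evaluation and a random-walk Metropolis sampler targeting the resulting posterior. This is an illustration under stated assumptions, not the paper's implementation: the names `synthetic_loglik`, `bsl_metropolis`, and the user-supplied `simulate_summaries` function, along with all tuning constants, are hypothetical, and the `diagonal_cov` flag merely stands in for the kind of simplified, possibly misspecified working covariance (diagonal or factor-structured) discussed in the third contribution.

```python
import numpy as np
from scipy.stats import multivariate_normal


def synthetic_loglik(theta, s_obs, simulate_summaries, n_sim=200,
                     diagonal_cov=False, rng=None):
    """Estimate the synthetic log-likelihood at parameter value theta.

    simulate_summaries(theta, rng) is a hypothetical user-supplied function
    returning one summary-statistic vector simulated from the model.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Simulate n_sim summary-statistic vectors at theta.
    S = np.array([simulate_summaries(theta, rng) for _ in range(n_sim)])
    mu_hat = S.mean(axis=0)              # estimated summary mean
    Sigma_hat = np.cov(S, rowvar=False)  # estimated summary covariance
    if diagonal_cov:
        # Simplified (possibly misspecified) working covariance:
        # keep only the diagonal to reduce the simulation burden.
        Sigma_hat = np.diag(np.diag(Sigma_hat))
    # Gaussian approximation: s_obs ~ N(mu_hat, Sigma_hat).
    return multivariate_normal.logpdf(s_obs, mean=mu_hat, cov=Sigma_hat)


def bsl_metropolis(s_obs, simulate_summaries, log_prior, theta0,
                   n_iter=5000, step=0.1, n_sim=200, seed=0):
    """Random-walk Metropolis targeting the synthetic-likelihood posterior."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    ll = synthetic_loglik(theta, s_obs, simulate_summaries, n_sim, rng=rng)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        ll_prop = synthetic_loglik(prop, s_obs, simulate_summaries,
                                   n_sim, rng=rng)
        log_alpha = (ll_prop + log_prior(prop)) - (ll + log_prior(theta))
        if np.log(rng.uniform()) < log_alpha:  # Metropolis accept/reject
            theta, ll = prop, ll_prop
        chain[t] = theta
    return chain
```

A typical toy usage would take `theta` to be the mean of a normal model and `simulate_summaries` to return, say, the sample mean and log sample variance of a simulated dataset. Note the design choice of retaining the stored log-likelihood estimate at the current state rather than re-estimating it each iteration, in the spirit of pseudo-marginal samplers; the synthetic-likelihood estimator is not unbiased, so the chain targets an approximation to the posterior in any case.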