Bayesian inference using synthetic likelihood: asymptotics and adjustments

Journal of the American Statistical Association (JASA), 2019
Abstract

Implementing Bayesian inference is often computationally challenging in applications involving complex models, and sometimes calculating the likelihood itself is difficult. Synthetic likelihood is one approach for carrying out inference when the likelihood is intractable but simulating from the model is straightforward. The method constructs an approximate likelihood by treating a vector summary statistic as multivariate normal, with the unknown mean and covariance matrix estimated by simulation for any given parameter value. Our article makes three contributions. The first shows that if the summary statistic satisfies a central limit theorem, then the synthetic likelihood posterior is asymptotically normal and gives asymptotically correct Bayesian inference. This result is similar to that obtained by approximate Bayesian computation. The second contribution compares the computational efficiency of Bayesian synthetic likelihood and approximate Bayesian computation using the same importance sampling scheme. From this comparison we argue that synthetic likelihood is generally more computationally efficient. Based on the asymptotic results, the third contribution proposes using adjusted inference methods when a possibly misspecified form, such as a diagonal matrix or a factor model, is assumed for the covariance matrix of the synthetic likelihood to speed up the computation. The methodology is illustrated with simulated and real examples.
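The synthetic likelihood construction described above can be sketched in a few lines: simulate summary statistics at a candidate parameter value, estimate their mean and covariance, and evaluate a multivariate normal density at the observed summary. This is a minimal illustration, not the paper's implementation; the function `simulate_summary` and all argument names are hypothetical placeholders for a user-supplied model simulator.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate_summary, n_sim=200, rng=None):
    """Estimate the synthetic log-likelihood at parameter value theta.

    simulate_summary(theta, rng) is a hypothetical user-supplied function
    that simulates one dataset from the model and returns its vector
    summary statistic.
    """
    rng = np.random.default_rng(rng)
    # Simulate n_sim summary statistics at the candidate parameter value
    S = np.array([simulate_summary(theta, rng) for _ in range(n_sim)])
    # Plug-in estimates of the summary statistic's mean and covariance
    mu_hat = S.mean(axis=0)
    Sigma_hat = np.cov(S, rowvar=False)
    # Gaussian approximation to the likelihood of the observed summary
    return multivariate_normal.logpdf(s_obs, mean=mu_hat, cov=Sigma_hat)
```

In a Bayesian synthetic likelihood sampler, this estimated log-likelihood would replace the intractable log-likelihood inside, for example, a Metropolis-Hastings acceptance ratio; the paper's third contribution concerns replacing the full estimate `Sigma_hat` with a structured (e.g. diagonal or factor) form to reduce the number of simulations required.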
