
Analysis of the Gibbs sampler for hierarchical inverse problems

Abstract

Many inverse problems arising in applications come from continuum models where the unknown parameter is a field. In practice the unknown field is discretized, resulting in a problem in $\mathbb{R}^N$, with the understanding that refining the discretization, that is, increasing $N$, will often be desirable. In the context of Bayesian inversion this situation suggests the importance of two issues: (i) defining hyper-parameters in such a way that they are interpretable in the continuum limit $N \to \infty$; (ii) understanding the efficiency of algorithms for probing the posterior distribution as a function of large $N$. Here we address these two issues in the context of linear inverse problems subject to additive Gaussian noise, within a hierarchical modelling framework based on a Gaussian prior for the unknown field and inverse-gamma priors for two hyper-parameters: the amplitude of the prior variance and the amplitude of the observational noise variance. The structure of the model is such that the Gibbs sampler can be easily implemented to probe the posterior distribution. We show that as $N$ increases, the behaviour of the algorithm exhibits two scales: an increasingly fast one for the noise variance and an increasingly slow one for the prior variance. In other words, as $N$ grows, the convergence properties of the Gibbs sampler improve for sampling the amplitude of the noise variance and deteriorate for sampling the amplitude of the prior variance. We discuss a reparametrization of the prior variance that is robust with respect to the increase in dimension, preventing this slowing down.
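The conjugate structure described in the abstract makes every full conditional explicit, so the Gibbs sampler cycles through closed-form draws. Below is a minimal sketch in Python/NumPy of such a sampler, assuming the standard formulation $y = Au + \eta$ with $\eta \sim N(0, \sigma^2 I)$, $u \mid \tau^2 \sim N(0, \tau^2 C)$, and inverse-gamma priors on $\tau^2$ and $\sigma^2$; the function and hyper-parameter names are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.linalg import cho_solve, solve_triangular

def gibbs_hierarchical(y, A, Cinv, n_iter=5000,
                       a_tau=1.0, b_tau=1.0, a_sig=1.0, b_sig=1.0,
                       rng=None):
    """Gibbs sampler sketch for the conjugate hierarchical model
        y = A u + eta,        eta    ~ N(0, sigma2 * I),
        u | tau2 ~ N(0, tau2 * C),   tau2   ~ InvGamma(a_tau, b_tau),
                                     sigma2 ~ InvGamma(a_sig, b_sig).
    Cinv is the prior precision C^{-1}. Hyper-parameter names are
    illustrative placeholders, not the paper's notation."""
    rng = rng or np.random.default_rng()
    M, N = A.shape
    tau2, sig2 = 1.0, 1.0
    AtA, Aty = A.T @ A, A.T @ y
    samples = []
    for _ in range(n_iter):
        # u | tau2, sigma2, y is Gaussian with precision P
        # and mean P^{-1} A^T y / sigma2.
        P = AtA / sig2 + Cinv / tau2
        L = np.linalg.cholesky(P)
        mean = cho_solve((L, True), Aty / sig2)
        # Draw from N(mean, P^{-1}) via u = mean + L^{-T} z.
        z = rng.standard_normal(N)
        u = mean + solve_triangular(L.T, z, lower=False)
        # tau2 | u: conjugate inverse-gamma update,
        # IG(a_tau + N/2, b_tau + u^T C^{-1} u / 2).
        tau2 = 1.0 / rng.gamma(a_tau + 0.5 * N,
                               1.0 / (b_tau + 0.5 * u @ Cinv @ u))
        # sigma2 | u, y: conjugate inverse-gamma update,
        # IG(a_sig + M/2, b_sig + ||y - A u||^2 / 2).
        r = y - A @ u
        sig2 = 1.0 / rng.gamma(a_sig + 0.5 * M,
                               1.0 / (b_sig + 0.5 * r @ r))
        samples.append((tau2, sig2))
    return np.array(samples), u
```

Note how the $\tau^2$ update conditions on the full field $u$, whose $N$ prior degrees of freedom increasingly dominate the inverse-gamma posterior as $N$ grows; this coupling between $u$ and $\tau^2$ is the source of the slowing down the abstract describes, and the reparametrization of the prior variance is aimed at breaking it.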
