
Convergence Analysis of the Data Augmentation Algorithm for Bayesian Linear Regression with Non-Gaussian Errors

Abstract

Gaussian errors are sometimes inappropriate in a multivariate linear regression setting because, for example, the data contain outliers. In such situations, it is often assumed that the error density is a scale mixture of multivariate normal densities of the form $f(\varepsilon) = \int_0^\infty |\Sigma|^{-\frac{1}{2}} u^{\frac{d}{2}} \, \phi_d \big( \Sigma^{-\frac{1}{2}} \sqrt{u} \, \varepsilon \big) \, h(u) \, du$, where $d$ is the dimension of the response, $\phi_d(\cdot)$ is the standard $d$-variate normal density, $\Sigma$ is an unknown $d \times d$ positive definite scale matrix, and $h(\cdot)$ is some fixed mixing density. Combining this alternative regression model with a default prior on the unknown parameters results in a highly intractable posterior density. Fortunately, there is a simple data augmentation (DA) algorithm and a corresponding Haar PX-DA algorithm that can be used to explore this posterior. This paper provides conditions (on $h$) for geometric ergodicity of the Markov chains underlying these Markov chain Monte Carlo (MCMC) algorithms. These results are extremely important from a practical standpoint because geometric ergodicity guarantees the existence of the central limit theorems that form the basis of all the standard methods of calculating valid asymptotic standard errors for MCMC-based estimators. The main result is that, if $h$ converges to 0 at the origin at an appropriate rate, and $\int_0^\infty u^{\frac{d}{2}} \, h(u) \, du < \infty$, then the DA and Haar PX-DA Markov chains are both geometrically ergodic. This result is quite far-reaching. For example, it implies the geometric ergodicity of the DA and Haar PX-DA Markov chains whenever $h$ is generalized inverse Gaussian, log-normal, inverted gamma (with shape parameter larger than $d/2$), or Fréchet (with shape parameter larger than $d/2$).
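As a hedged illustration (not code from the paper), the scale-mixture density above can be evaluated at a point by one-dimensional numerical quadrature over the mixing variable $u$, and the moment condition $\int_0^\infty u^{d/2} h(u)\,du < \infty$ can be checked the same way. The sketch below takes $h$ to be an inverted-gamma density with shape $\alpha > d/2$, one of the families the paper covers; all function names here are illustrative choices, not the authors' notation.

```python
import numpy as np
from scipy import integrate, stats

def mixture_density(eps, Sigma, h):
    """Numerically evaluate f(eps) = \\int_0^inf |Sigma|^{-1/2} u^{d/2}
    phi_d(Sigma^{-1/2} sqrt(u) eps) h(u) du by quadrature (illustrative)."""
    d = len(eps)
    Sigma_inv = np.linalg.inv(Sigma)
    det = np.linalg.det(Sigma)
    quad_form = eps @ Sigma_inv @ eps  # eps' Sigma^{-1} eps

    def integrand(u):
        # u^{d/2} * N_d(0, Sigma/u) density at eps, weighted by h(u)
        return (u ** (d / 2) * (2 * np.pi) ** (-d / 2) / np.sqrt(det)
                * np.exp(-0.5 * u * quad_form) * h(u))

    val, _ = integrate.quad(integrand, 0, np.inf)
    return val

d = 2
# Inverted-gamma mixing density; shape alpha > d/2 satisfies the
# paper's sufficient condition (here alpha = 2 > 1 = d/2).
alpha, scale = 2.0, 1.0
h = lambda u: stats.invgamma.pdf(u, a=alpha, scale=scale)

# Check the moment condition \int u^{d/2} h(u) du < infinity numerically.
moment, _ = integrate.quad(lambda u: u ** (d / 2) * h(u), 0, np.inf)
print("d/2-th moment of h:", moment)       # finite since alpha > d/2

# Density at the origin: the integrand reduces to E[u^{d/2}] / (2*pi)^{d/2},
# so for d = 2 this equals E[u] / (2*pi) = 1 / (2*pi) here.
print("f(0):", mixture_density(np.zeros(d), np.eye(d), h))
```

For heavier-tailed mixing densities (e.g., inverted gamma with small shape), the resulting error density has heavier tails than the Gaussian, which is exactly the robustness-to-outliers motivation in the abstract.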
