Posterior Convergence and Model Estimation in Bayesian Change-point Problems

We study the posterior distribution in the Bayesian multiple change-point regression problem when both the number and the locations of the change-points are unknown. While it is relatively easy to apply the general theory to obtain a convergence rate up to a logarithmic factor, establishing the exact parametric rate of convergence of the posterior distribution requires additional work and assumptions. Under these assumptions we also demonstrate the asymptotic normality of the segment levels. For inference on the number of change-points, we show that the Bayesian approach yields a consistent posterior estimate. Finally, we argue that the point-wise posterior convergence property demonstrated here may carry poor finite-sample performance: posterior consistency for model selection necessarily implies that the maximal squared risk is asymptotically larger than the optimal rate. This is the Bayesian version of a phenomenon that has been noted and studied by other authors.
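As an illustration of the kind of posterior the abstract refers to, the following is a minimal sketch (not the paper's estimator) of an exact Bayesian posterior over the location of a single change-point in Gaussian noise. It assumes a known noise variance, a conjugate Gaussian prior on each segment mean, and a uniform prior over split points; the names `log_marg` and `changepoint_posterior` are hypothetical.

```python
import numpy as np

def log_marg(y, sigma2=1.0, tau2=10.0):
    """Log marginal likelihood of one segment under mean ~ N(0, tau2),
    noise ~ N(0, sigma2): y ~ N(0, sigma2*I + tau2*J)."""
    n = y.size
    s, ss = y.sum(), (y ** 2).sum()
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * ((n - 1) * np.log(sigma2) + np.log(sigma2 + n * tau2))
            - 0.5 * (ss - tau2 * s ** 2 / (sigma2 + n * tau2)) / sigma2)

def changepoint_posterior(y):
    """Posterior over the split index k (segments y[:k], y[k:]),
    computed by exact enumeration with a uniform prior on k."""
    n = y.size
    logp = np.array([log_marg(y[:k]) + log_marg(y[k:]) for k in range(1, n)])
    logp -= logp.max()          # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum()          # normalized posterior over k = 1, ..., n-1

# Simulated data: mean jumps from 0 to 2 at index 60
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])
post = changepoint_posterior(y)
k_hat = 1 + int(post.argmax())  # posterior mode of the change-point location
```

Extending this enumeration to an unknown number of change-points is what makes the full problem, and the model-selection consistency discussed above, substantially harder.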