
A probabilistic view on predictive constructions for Bayesian learning

Abstract

Given a sequence $X=(X_1,X_2,\ldots)$ of random observations, a Bayesian forecaster aims to predict $X_{n+1}$ based on $(X_1,\ldots,X_n)$ for each $n\ge 0$. To this end, in principle, she only needs to select a collection $\sigma=(\sigma_0,\sigma_1,\ldots)$, called a ``strategy'' in what follows, where $\sigma_0(\cdot)=P(X_1\in\cdot)$ is the marginal distribution of $X_1$ and $\sigma_n(\cdot)=P(X_{n+1}\in\cdot\mid X_1,\ldots,X_n)$ is the $n$-th predictive distribution. By the Ionescu-Tulcea theorem, $\sigma$ can be assigned directly, without passing through the usual prior/posterior scheme. One main advantage is that no prior probability needs to be selected. In a nutshell, this is the predictive approach to Bayesian learning. This paper provides a concise review of this approach. We try to place it in the proper framework, to clear up a few misunderstandings, and to provide a unifying view. Some recent results are discussed as well. In addition, some new strategies are introduced and the corresponding distribution of the data sequence $X$ is determined. The strategies concern generalized Pólya urns, random change points, covariates and stationary sequences.
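As a concrete illustration of assigning a strategy $\sigma$ directly, the following is a minimal sketch (not the paper's own construction) of the classical two-colour Pólya urn: each predictive $\sigma_n$ puts mass on a colour proportional to its current count, and no prior is ever written down. The function name and parameters are illustrative, not taken from the paper.

```python
import random
from collections import Counter

def polya_urn_sample(initial, n_steps, seed=0):
    """Simulate X_1,...,X_n from a Polya urn strategy.

    sigma_n assigns to each colour a probability proportional to its
    current count in the urn; the prior/posterior scheme never appears.
    """
    rng = random.Random(seed)
    urn = Counter(initial)  # initial composition, e.g. {'r': 1, 'b': 1}
    seq = []
    for _ in range(n_steps):
        colours = list(urn)
        weights = [urn[c] for c in colours]
        x = rng.choices(colours, weights=weights)[0]  # draw from sigma_n
        urn[x] += 1  # reinforce the observed colour
        seq.append(x)
    return seq
```

By exchangeability (de Finetti), the sequence generated this way is distributed as i.i.d. draws from a random Beta-distributed frequency, even though only the predictives were specified.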
