Nested particle filters for online parameter estimation in discrete-time state-space Markov models

We address the problem of estimating the fixed parameters of a state-space dynamic system using sequential Monte Carlo methods. The proposed approach relies on a nested structure that employs two layers of particle filters to approximate the posterior probability law of the static parameters and the dynamic variables of the system of interest. This is similar to the recent SMC^2 method, but the proposed algorithm operates in a purely recursive manner. In particular, the computational complexity of the recursive steps of the proposed method is constant over time and the scheme for the rejuvenation of the particles in the parameter space is simpler. We analyze the approximation, computed via the proposed scheme, of integrals of real bounded functions with respect to the posterior distribution of the system parameters. For a finite time horizon, we prove that the approximation errors converge toward 0 in L_p (p >= 1), under mild assumptions, with rate c/\sqrt{N}, where c is a constant and N is the number of samples in the parameter space. Under a set of stronger assumptions related to the stability of the optimal filter for the model, the compactness of the state and parameter spaces, and certain bounds on the family of likelihood functions, we prove that the convergence of the L_p norms of the approximation errors is uniform over time, and provide an explicit rate function. The uniform convergence result has some relevant consequences. One of them is that the proposed scheme can asymptotically identify the parameter values for a broad class of (non-ambiguous) state-space models. Uniform convergence, together with the rejuvenation or jittering step of the algorithm, also provides a positive lower bound, uniform over time and the number of particles, for the normalized effective sample size of the filter. We conclude the paper with a simple numerical example that illustrates some of the theoretical findings.
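As a rough illustration of the two-layer structure described above, the Python sketch below runs a nested particle filter on a toy scalar AR(1) model with an unknown autoregressive coefficient. The model, the jittering kernel, and all tuning constants (N, M, noise scales) are illustrative assumptions and are not taken from the paper; the point is only to show the mechanics: each outer particle carries a parameter value and its own inner particle filter, the inner filters produce per-parameter likelihood estimates, and those estimates weight the outer layer.

```python
import numpy as np

# Minimal sketch of a nested particle filter (assumed toy model:
# x_t = theta * x_{t-1} + u_t,  y_t = x_t + v_t, Gaussian noises).
# All constants below are illustrative, not the paper's numerical example.

rng = np.random.default_rng(0)

SIG_U, SIG_V = 0.5, 1.0          # state and observation noise std. deviations
THETA_TRUE = 0.8                 # parameter to be estimated online

def propagate(x, theta):
    """One step of the state transition kernel."""
    return theta * x + SIG_U * rng.standard_normal(x.shape)

def log_lik(y, x):
    """Log-likelihood (up to a constant) of observation y given states x."""
    return -0.5 * ((y - x) / SIG_V) ** 2

N, M, T = 200, 100, 500          # outer particles, inner particles, time steps
JITTER = 0.05 / np.sqrt(N)       # scale of the rejuvenation (jittering) kernel

theta = rng.uniform(-1.0, 1.0, size=N)   # outer particles (parameter values)
x = rng.standard_normal((N, M))          # one inner particle filter per parameter

x_true = 0.0
for t in range(T):
    # simulate a synthetic observation from the true model
    x_true = THETA_TRUE * x_true + SIG_U * rng.standard_normal()
    y = x_true + SIG_V * rng.standard_normal()

    # 1) jitter (rejuvenate) the parameter particles
    theta = theta + JITTER * rng.standard_normal(N)

    # 2) propagate every inner filter one step under its own parameter
    x = propagate(x, theta[:, None])

    # 3) inner weights give an estimate of p(y_t | theta^i) for each i
    logw = log_lik(y, x)                    # shape (N, M)
    lik_theta = np.exp(logw).mean(axis=1)   # per-parameter likelihood estimate

    # 4) resample the inner particles within each filter
    for i in range(N):
        w = np.exp(logw[i] - logw[i].max())
        idx = rng.choice(M, size=M, p=w / w.sum())
        x[i] = x[i, idx]

    # 5) weight and resample the outer (parameter) particles
    W = lik_theta / lik_theta.sum()
    idx = rng.choice(N, size=N, p=W)
    theta, x = theta[idx], x[idx]

print("posterior mean of theta:", theta.mean())
```

In this sketch the jitter scale is chosen to shrink as N grows, in the spirit of the rejuvenation step mentioned in the abstract, although the specific jittering kernel and its scaling in the paper may differ from this illustrative choice.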