Learning of state-space models with highly informative observations: a tempered Sequential Monte Carlo solution

Probabilistic (or Bayesian) modeling and learning offers attractive possibilities for a systematic representation of uncertainty grounded in probability theory. Recent advances in Monte Carlo methods have made previously intractable problems solvable using only the computational power available in a standard personal computer. For probabilistic learning of unknown parameters in nonlinear state-space models, methods based on the particle filter have proven useful. A notoriously challenging problem arises, however, when the observations are highly informative, i.e., when there is very little or no measurement noise present. The particle filter then struggles to estimate one of the basic components of most parameter learning algorithms: the likelihood p(data|parameters). To this end, we suggest an algorithm which initially assumes that artificial measurement noise is present. The variance of this noise is sequentially decreased in an adaptive fashion, so that in the end we recover the original problem, or a very close approximation of it. Computationally, the parameters are learned using a sequential Monte Carlo (SMC) sampler, which gives the proposed method a clear resemblance to the SMC^2 method. Another natural link is made to the ideas underlying approximate Bayesian computation (ABC). We provide a theoretical justification (implying convergence results) for the suggested approach. We also illustrate it with numerical examples, and in particular show promising results on a challenging Wiener-Hammerstein benchmark.
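To make the tempering idea concrete, the following is a minimal sketch of the inner likelihood estimator the abstract alludes to: a bootstrap particle filter whose Gaussian measurement density is inflated by an artificial noise variance, evaluated over a sequence of shrinking variances. Everything here is illustrative, not the paper's actual algorithm: the scalar model with a direct observation y_t = x_t, the function names, and the fixed geometric decay schedule (the paper decreases the variance adaptively) are all assumptions.

```python
import numpy as np

def bootstrap_pf_loglik(y, theta, propagate, artificial_var,
                        num_particles=200, rng=None):
    """Bootstrap particle filter estimate of log p(y | theta), with an
    artificial Gaussian measurement noise of variance `artificial_var`
    added to an (assumed) noise-free observation model y_t = x_t.
    `propagate(x, theta, rng)` samples the state transition; its form is
    a placeholder, since the abstract does not specify the model."""
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(num_particles)  # initial particle cloud
    loglik = 0.0
    for t in range(len(y)):
        x = propagate(x, theta, rng)  # propagate particles forward
        # Weight particles under the tempered (noise-inflated) likelihood.
        logw = (-0.5 * (y[t] - x) ** 2 / artificial_var
                - 0.5 * np.log(2 * np.pi * artificial_var))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())  # log of the average weight
        # Multinomial resampling.
        idx = rng.choice(num_particles, num_particles, p=w / w.sum())
        x = x[idx]
    return loglik

def tempered_logliks(y, theta, propagate, var0=1.0, decay=0.5, levels=8):
    """Evaluate the likelihood over a (hypothetical) geometric schedule of
    artificial noise variances, shrinking toward the original problem."""
    return [bootstrap_pf_loglik(y, theta, propagate, var0 * decay ** k)
            for k in range(levels)]
```

In the full method, such tempered likelihood estimates would drive an SMC sampler over the parameters, reweighting and moving a population of parameter particles as the artificial noise level decreases; the sketch shows only the inner likelihood estimator, not that outer sampler.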