Towards Flexibility and Interpretability of Gaussian Process State-Space Model

Abstract

The Gaussian process state-space model (GPSSM) has attracted much attention over the past decade. However, its model representation power remains far from satisfactory. Most GPSSM studies rely on the standard Gaussian process (GP) with a basic kernel, such as the squared exponential (SE) kernel or the Matérn kernel, which limits the representation power of the model and its applicability in complex scenarios. To address this issue, this paper proposes a novel class of probabilistic state-space models, called TGPSSMs. By leveraging a parametric normalizing flow, TGPSSMs enrich the GP priors in the standard GPSSM, making the state-space model more flexible and expressive. Additionally, we present a scalable variational inference algorithm for learning and inference in TGPSSMs, which provides a flexible and optimal structure for the variational distribution of the latent states. The algorithm is interpretable and computationally efficient owing to the sparse GP representation and the bijective nature of the normalizing flow. To further improve learning and inference performance, we integrate a constrained optimization framework that enhances the state-space representation capability and optimizes the hyperparameters. Experimental results on various synthetic and real datasets show that the proposed TGPSSM yields superior learning and inference performance compared to several state-of-the-art methods. The accompanying source code is available at \url{https://github.com/zhidilin/TGPSSM}.
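To give a concrete feel for the core idea, below is a minimal, illustrative sketch (not the authors' implementation) of a transformed GP prior: a function is drawn from a standard GP with an SE kernel and then warped by a parametric bijective map. Here a sinh-arcsinh transform stands in for the normalizing flow, and the flow parameters a and b are hypothetical choices for illustration.

import numpy as np

def se_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared exponential (SE) kernel: k(x, x') = s^2 * exp(-(x - x')^2 / (2 l^2))
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sinh_arcsinh_flow(f, a=0.0, b=1.0):
    # Bijective sinh-arcsinh warp; a controls skewness, b controls tail weight.
    # This is one simple example of a parametric normalizing flow.
    return np.sinh(b * np.arcsinh(f) + a)

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 200)
K = se_kernel(x, x) + 1e-6 * np.eye(x.size)  # jitter for numerical stability

f = rng.multivariate_normal(np.zeros(x.size), K)  # sample from the standard GP prior
g = sinh_arcsinh_flow(f, a=0.5, b=1.5)            # warped, non-Gaussian prior sample

Because the warp is bijective, the density of the transformed sample remains tractable via the change-of-variables formula, which is what allows the enriched prior to be used without sacrificing the computational efficiency noted in the abstract.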
