Bayesian Alignments of Warped Multi-Output Gaussian Processes
We present a Bayesian extension to convolution processes which defines a relationship between multiple functions via an embedding in a shared latent space. The proposed model allows for both arbitrary alignments of the inputs and non-parametric output warpings to transform the observations. This gives rise to multiple deep Gaussian process models connected via latent generating processes. We derive an efficient variational approximation based on nested variational compression and show how the model can be used to extract shared information between dependent time series, recovering an interpretable functional decomposition of the learning problem. The method is applied to both an artificial data set and measurements from a pair of wind turbines.
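As an illustrative sketch (in assumed notation, not the paper's), the functional decomposition described in the abstract can be read as a per-output composition of an input alignment $a_d$, an output-specific latent function $f_d$ generated by a convolution process, and a non-parametric output warping $g_d$:

\[
y_d(x) = g_d\big(f_d(a_d(x))\big) + \varepsilon_d,
\qquad
f_d(z) = \int k_d(z - z')\, w(z')\,\mathrm{d}z',
\]

where $w$ is the shared latent generating process, $k_d$ is an output-specific smoothing kernel, and $a_d$, $g_d$, and $w$ carry Gaussian process priors, yielding the multiple connected deep Gaussian process models mentioned above. The symbols $a_d$, $f_d$, $g_d$, $k_d$, $w$, and $\varepsilon_d$ are illustrative assumptions rather than the authors' notation.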