Mapping Learning with Partially Latent Output

A partially-latent-output mapping (PLOM) method is proposed. PLOM infers a regression function between an observed input (typically high-dimensional) and a partially-latent output (typically low-dimensional). More precisely, the vector-valued output variable is composed of both observed and unobserved components. The main and novel feature of PLOM is that it provides a framework for situations where some of the output components can be observed while the remaining components can neither be measured nor easily annotated. Moreover, by modeling the unobserved output components as latent variables, we prevent the observed components from being contaminated by artifacts that cannot be absorbed by standard noise models. We also emphasize that the proposed formulation unifies regression and dimensionality reduction into a common framework, referred to as Gaussian Locally-Linear Mapping (GLLiM). We formally derive EM inference procedures for the corresponding family of models. Tests and comparisons with state-of-the-art methods show that PLOM's prominent advantage is its robustness to various experimental conditions.
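To make the setting concrete, the sketch below simulates data from a GLLiM-style generative model in which the full output is split into observed and latent parts, and the high-dimensional input is a locally-linear (mixture-indexed) affine image of that output plus noise. This is only an illustration of the data-generating assumptions described in the abstract, not the authors' code; all dimensions, parameter names, and the isotropic noise choice are hypothetical.

```python
# Minimal generative sketch of a GLLiM-style model with a partially-latent output.
# Hypothetical dimensions and parameters; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

K = 3        # number of locally-linear components
D = 50       # observed input dimension (high-dimensional)
Lt = 2       # observed output components
Lw = 1       # latent (unobserved) output components
L = Lt + Lw  # full output dimension

pi = np.full(K, 1.0 / K)                           # mixture weights
c = rng.normal(size=(K, L))                        # per-component output means
Gamma = np.stack([np.eye(L) for _ in range(K)])    # per-component output covariances
A = rng.normal(size=(K, D, L))                     # locally-linear maps: output -> input
b = rng.normal(size=(K, D))                        # per-component offsets
sigma2 = 0.01                                      # isotropic input-noise variance (assumed)

def sample(n):
    """Draw n (input, observed-output) pairs; the latent output part is never returned."""
    k = rng.choice(K, size=n, p=pi)
    # Full low-dimensional output: observed part stacked with latent part.
    x = np.stack([rng.multivariate_normal(c[j], Gamma[j]) for j in k])
    # High-dimensional input: component-wise affine image of the full output plus noise.
    y = np.einsum('ndl,nl->nd', A[k], x) + b[k] + np.sqrt(sigma2) * rng.normal(size=(n, D))
    t = x[:, :Lt]  # only these output components are available for training
    return y, t

Y, T = sample(500)
print(Y.shape, T.shape)  # (500, 50) (500, 2)
```

Under these assumptions, an EM procedure of the kind derived in the paper would estimate the component parameters while marginalizing over the unobserved output components; the sketch only shows the forward (generative) direction.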