Contextual Learning

Abstract

Supervised, semi-supervised, and unsupervised learning estimate a function from input/output samples. Generalization to unseen samples requires prior knowledge (priors) about this function. However, some priors cannot be expressed by considering only the function, its input, and its output. In this paper, we propose contextual learning, which uses contextual data to define such priors. Contextual data are neither from the input space nor from the output space of the function, but they carry useful information for learning it. We exploit this information by formulating priors about how contextual data relate to the target function. Incorporating these priors regularizes learning and thereby improves generalization. Contextual learning subsumes a variety of related approaches, e.g., multi-task learning and learning using privileged information. Our contributions are (i) a new perspective that connects these previously isolated approaches, (ii) insights into how these methods incorporate useful priors by implementing different patterns, (iii) a simple way to apply them to novel problems, and (iv) a systematic experimental evaluation of these patterns on two supervised learning tasks.
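To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of the multi-task pattern the abstract mentions: a shared linear representation of the input is trained jointly on the primary targets y and on contextual data c. The contextual head never sees the test-time inputs' labels; it only acts as a regularizer that biases the shared representation toward structure that also explains the context. All dimensions, learning rates, and variable names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of contextual learning via the multi-task
# pattern: minimize  mean(|X W a - y|^2) + lam * mean(|X W B - c|^2),
# where W is a shared representation, a the primary head, and B the
# contextual head. None of these symbols come from the paper itself.

rng = np.random.default_rng(0)

n, d, k = 300, 5, 2                        # samples, input dim, shared-rep dim
X = rng.normal(size=(n, d))                # inputs
W_true = rng.normal(size=(d, k)) / np.sqrt(d)
H = X @ W_true                             # hidden structure shared by y and c
y = H @ (rng.normal(size=k) / np.sqrt(k))  # primary targets
c = H @ (rng.normal(size=(k, 3)) / np.sqrt(k))  # contextual data: neither input nor output

# Small random init (zero init is a saddle point of this bilinear loss).
W = 0.1 * rng.normal(size=(d, k))          # shared representation weights
a = 0.1 * rng.normal(size=k)               # primary head
B = 0.1 * rng.normal(size=(k, 3))          # contextual head
lam, lr = 0.5, 0.05                        # contextual weight, step size

for _ in range(3000):
    Hh = X @ W
    err_y = Hh @ a - y                     # primary residual, shape (n,)
    err_c = Hh @ B - c                     # contextual residual, shape (n, 3)
    # Gradients of the joint objective with respect to each block.
    gW = (2 / n) * X.T @ (np.outer(err_y, a) + lam * err_c @ B.T)
    ga = (2 / n) * Hh.T @ err_y
    gB = (2 * lam / n) * Hh.T @ err_c
    W -= lr * gW
    a -= lr * ga
    B -= lr * gB

mse = float(np.mean((X @ W @ a - y) ** 2))
print(f"primary-task MSE after joint training: {mse:.4f}")
```

Setting lam = 0 recovers plain single-task training of the shared representation; a nonzero lam pulls W toward features that also predict the contextual data, which is the regularizing effect the abstract describes.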
