Contextual Learning

Abstract

Supervised, semi-supervised, and unsupervised learning estimate a function given input/output samples. Generalization to unseen samples requires making prior assumptions about this function. However, many useful priors cannot be expressed in terms of the function, its inputs, and its outputs alone. In this paper, we propose contextual learning, which uses contextual data to define such priors. Contextual data come from neither the input space nor the output space of the function, yet contain information useful for learning it. We can exploit this information by formulating priors about how contextual data relate to the target function. Incorporating these priors regularizes learning and thereby improves generalization. This facilitates many challenging learning tasks, in particular when the acquisition of training data is costly or when effective learning requires prohibitively large amounts of data. The first contribution of this paper is a unified view on contextual learning, which subsumes a variety of related approaches, such as multi-task and multi-view learning. The second contribution is a set of patterns for utilizing contextual learning in novel problems. The third contribution is a systematic experimental evaluation of these patterns in two supervised learning tasks.
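To make the idea concrete, here is a minimal sketch (not the paper's method; all names, shapes, and the alternating-minimization scheme are illustrative assumptions). It fits a linear predictor under a contextual prior stating that the function's predictions should also be explainable as a linear function of the contextual data C, which lives in neither the input nor the output space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative assumptions, not from the paper): inputs X,
# noisy outputs y, and contextual data C -- a side signal correlated
# with the target function but outside its input and output spaces.
n, d, k = 12, 5, 2
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
f_clean = X @ w_true
y = f_clean + 0.5 * rng.normal(size=n)                      # noisy supervised targets
C = np.outer(f_clean, rng.normal(size=k)) + 0.1 * rng.normal(size=(n, k))

def contextual_fit(X, y, C, lam=1.0, iters=20):
    """Least squares with a contextual prior: minimize
    ||Xw - y||^2 + lam * ||Xw - Cb||^2 over w and b by alternating steps,
    so predictions stay consistent with a linear map from the context."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        # Fit the context head b to the current predictions.
        b = np.linalg.lstsq(C, X @ w, rcond=None)[0]
        # Closed-form w-step: blend the data targets with the context's view.
        target = (y + lam * (C @ b)) / (1.0 + lam)
        w = np.linalg.lstsq(X, target, rcond=None)[0]
    return w

w_plain = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares
w_ctx = contextual_fit(X, y, C, lam=1.0)         # contextually regularized
```

With lam=0 the contextual penalty vanishes and the fit reduces to ordinary least squares; increasing lam pulls the solution toward functions whose outputs the contextual data can explain, which is the regularization effect the abstract describes.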
