Metalearning and multitask learning are two frameworks for solving a group of related learning tasks more efficiently than we could hope to solve each of the individual tasks on their own. In multitask learning, we are given a fixed set of related learning tasks and need to output one accurate model per task, whereas in metalearning we are given tasks that are drawn i.i.d. from a metadistribution and need to output some common information that can be easily specialized to new tasks from the metadistribution. We consider a binary classification setting where tasks are related by a shared representation, that is, every task $P$ can be solved by a classifier of the form $f_P \circ h$, where $h \in \mathcal{H}$ is a map from features to a representation space that is shared across tasks, and $f_P \in \mathcal{F}$ is a task-specific classifier from the representation space to labels. The main question we ask is: how much data do we need to metalearn a good representation? Here, the amount of data is measured in terms of the number of tasks $t$ that we need to see and the number of samples $n$ per task. We focus on the regime where $n$ is extremely small. Our main result shows that, in a distribution-free setting where the feature vectors are in $\mathbb{R}^d$, the representation is a linear map from $\mathbb{R}^d \to \mathbb{R}^k$, and the task-specific classifiers are halfspaces in $\mathbb{R}^k$, we can metalearn a representation with error $\varepsilon$ using $n = k+2$ samples per task and $d \cdot (1/\varepsilon)^{O(k)}$ tasks. Learning with so few samples per task is remarkable because metalearning would be impossible with $k+1$ samples per task, and because we cannot even hope to learn an accurate task-specific classifier with $k+2$ samples per task. Our work also yields a characterization of distribution-free multitask learning and reductions between meta and multitask learning.
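As a minimal illustrative sketch (not taken from the paper), the data model described above can be simulated as follows: a shared linear representation $B \in \mathbb{R}^{k \times d}$ composed with a task-specific halfspace, sampled for $t$ tasks with only $n = k+2$ labeled examples each. The sampling distributions and variable names other than $d$, $k$, $t$, $n$ are arbitrary choices for illustration.

```python
# Illustrative sketch of the shared-representation task model (assumptions:
# Gaussian features, Gaussian task vectors; the paper's setting is distribution-free).
import numpy as np

rng = np.random.default_rng(0)
d, k = 20, 3        # ambient dimension d, representation dimension k
t, n = 1000, k + 2  # t tasks, n = k + 2 samples per task (the regime studied)

B = rng.standard_normal((k, d))  # shared linear representation h(x) = Bx (unknown to the learner)

def sample_task(num_samples: int):
    """Draw one task: a task-specific halfspace w_P in R^k and labeled samples."""
    w_P = rng.standard_normal(k)               # task-specific classifier f_P
    X = rng.standard_normal((num_samples, d))  # feature vectors in R^d
    y = np.sign(X @ B.T @ w_P)                 # labels f_P(h(x)) = sign(<w_P, Bx>)
    return X, y

tasks = [sample_task(n) for _ in range(t)]     # the metalearner sees t tasks, n samples each
```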