Metalearning with Very Few Samples Per Task

21 December 2023
Maryam Aliakbarpour, Konstantina Bairaktari, Gavin Brown, Adam D. Smith, Nathan Srebro, Jonathan Ullman
Abstract

Metalearning and multitask learning are two frameworks for solving a group of related learning tasks more efficiently than we could hope to solve each of the individual tasks on their own. In multitask learning, we are given a fixed set of related learning tasks and need to output one accurate model per task, whereas in metalearning we are given tasks that are drawn i.i.d. from a metadistribution and need to output some common information that can be easily specialized to new tasks from the metadistribution. We consider a binary classification setting where tasks are related by a shared representation, that is, every task $P$ can be solved by a classifier of the form $f_P \circ h$ where $h \in H$ is a map from features to a representation space that is shared across tasks, and $f_P \in F$ is a task-specific classifier from the representation space to labels. The main question we ask is how much data do we need to metalearn a good representation? Here, the amount of data is measured in terms of the number of tasks $t$ that we need to see and the number of samples $n$ per task. We focus on the regime where $n$ is extremely small. Our main result shows that, in a distribution-free setting where the feature vectors are in $\mathbb{R}^d$, the representation is a linear map from $\mathbb{R}^d \to \mathbb{R}^k$, and the task-specific classifiers are halfspaces in $\mathbb{R}^k$, we can metalearn a representation with error $\varepsilon$ using $n = k+2$ samples per task, and $d \cdot (1/\varepsilon)^{O(k)}$ tasks. Learning with so few samples per task is remarkable because metalearning would be impossible with $k+1$ samples per task, and because we cannot even hope to learn an accurate task-specific classifier with $k+2$ samples per task. Our work also yields a characterization of distribution-free multitask learning and reductions between meta and multitask learning.
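To make the setting concrete, below is a minimal Python sketch of the data-generating model described in the abstract: a shared linear map $h$ from $\mathbb{R}^d$ to $\mathbb{R}^k$ and one halfspace per task, with only $n = k+2$ labeled samples per task. This is not the paper's algorithm; the dimensions, the number of tasks, and the Gaussian feature distribution are illustrative assumptions (the paper's guarantee is distribution-free).

```python
# Minimal sketch (not the paper's algorithm): synthetic data from the
# shared-representation model in the abstract. The values of d, k,
# num_tasks, and the Gaussian feature distribution are assumptions
# chosen purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

d, k = 20, 3          # ambient and representation dimensions (assumed values)
n = k + 2             # samples per task, matching the paper's regime
num_tasks = 500       # number of tasks t (assumed; the paper needs d * (1/eps)^O(k))

# Shared representation h: R^d -> R^k, here a random linear map.
H = rng.standard_normal((k, d))

tasks = []
for _ in range(num_tasks):
    # Task-specific halfspace f_P acting on the representation space R^k.
    w = rng.standard_normal(k)
    # Draw n = k + 2 feature vectors; a Gaussian is used only as an example,
    # since the paper's setting is distribution-free.
    X = rng.standard_normal((n, d))
    y = np.sign(X @ H.T @ w)      # labels from f_P(h(x)) = sign(<w, Hx>)
    tasks.append((X, y))

# With only k + 2 labeled points per task, no single task can be learned
# accurately on its own; the abstract's point is that pooling roughly
# d * (1/eps)^O(k) such tasks suffices to metalearn the shared map H.
print(len(tasks), "tasks, each with", n, "samples in dimension", d)
```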
