DataRater: Meta-Learned Dataset Curation

23 May 2025
Dan A. Calian
Gregory Farquhar
Iurii Kemaev
Luisa M. Zintgraf
Matteo Hessel
Jeremy Shar
Junhyuk Oh
András György
Tom Schaul
Jeffrey Dean
Hado van Hasselt
David Silver
Abstract

The quality of foundation models depends heavily on their training data. Consequently, great effort has been put into dataset curation. Yet most approaches rely either on manually tuned, coarse-grained mixtures of large buckets of data or on filtering by hand-crafted heuristics. An approach that is ultimately more scalable (and more satisfying) is to learn which data is actually valuable for training. This kind of meta-learning could enable more sophisticated, fine-grained, and effective curation. Our proposed DataRater is an instance of this idea: it estimates the value of training on any particular data point, meta-learned via 'meta-gradients' with the objective of improving training efficiency on held-out data. In extensive experiments across a range of model scales and datasets, we find that filtering data with the DataRater is highly effective, yielding significantly improved compute efficiency.
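To make the filtering step concrete: once a DataRater-style model has assigned a per-example value score, curation amounts to keeping only the highest-scoring examples in each batch. The sketch below shows that selection step only (the meta-gradient training of the scorer itself is not shown); the function name and the `keep_fraction` parameter are illustrative, not from the paper.

```python
import numpy as np

def filter_by_scores(scores, batch, keep_fraction=0.5):
    """Keep the highest-scoring fraction of a batch.

    `scores` holds per-example values from a (hypothetical) learned
    data-rating model; higher means more useful for training.
    """
    k = max(1, int(len(batch) * keep_fraction))
    top = np.argsort(scores)[-k:]           # indices of the k best examples
    return [batch[i] for i in sorted(top)]  # preserve original batch order

# Toy usage: four examples with illustrative scores.
batch = ["a", "b", "c", "d"]
scores = np.array([0.1, 0.9, 0.4, 0.7])
print(filter_by_scores(scores, batch))  # keeps "b" and "d"
```

In practice the surviving examples would feed the next training step, so the scorer directly shapes the effective training distribution.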

@article{calian2025_2505.17895,
  title={DataRater: Meta-Learned Dataset Curation},
  author={Dan A. Calian and Gregory Farquhar and Iurii Kemaev and Luisa M. Zintgraf and Matteo Hessel and Jeremy Shar and Junhyuk Oh and András György and Tom Schaul and Jeffrey Dean and Hado van Hasselt and David Silver},
  journal={arXiv preprint arXiv:2505.17895},
  year={2025}
}