Training on the Test Task Confounds Evaluation and Emergence

10 July 2024
Ricardo Dominguez-Olmedo
Florian E. Dorner
Moritz Hardt
Abstract

We study a fundamental problem in the evaluation of large language models that we call training on the test task. Unlike wrongful practices such as training on the test data, leakage, or data contamination, training on the test task is not a malpractice. Rather, the term describes a growing set of practices that utilize knowledge about evaluation tasks at training time. We demonstrate that training on the test task confounds both relative model evaluations and claims about emergent capabilities. We argue that the seeming superiority of one model family over another may be explained by a different degree of training on the test task. To this end, we propose an effective method to adjust for the effect of training on the test task on benchmark evaluations. Put simply, we fine-tune each model under comparison on the same task-relevant data prior to evaluation. We then show that instances of emergent behavior disappear gradually as models train on the test task. Our work promotes a new perspective on the evaluation of large language models, with broad implications for benchmarking and the study of emergent capabilities.

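As a rough illustration of the adjustment the abstract describes, the sketch below fine-tunes each model under comparison on the same task-relevant data and only then evaluates it on a benchmark. The model names, the toy fine-tuning examples, and the multiple-choice scoring routine are placeholder assumptions for illustration, not the paper's actual experimental setup.

# Sketch: adjust for training on the test task by fine-tuning every model
# under comparison on the same task-relevant data before evaluation.
# Model names, data, and the scoring routine are illustrative placeholders.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

def finetune_on_task_data(model, tokenizer, texts, epochs=1, lr=5e-5):
    # Lightweight causal-LM fine-tuning on task-formatted text examples.
    model.train()
    optimizer = AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for text in texts:
            batch = tokenizer(text, return_tensors="pt", truncation=True)
            loss = model(**batch, labels=batch["input_ids"]).loss  # next-token loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

@torch.no_grad()
def multiple_choice_accuracy(model, tokenizer, questions):
    # Score each answer option by its (negative) loss under the model; pick the best.
    model.eval()
    correct = 0
    for q in questions:  # q = {"prompt": str, "options": [str], "answer": int}
        scores = []
        for option in q["options"]:
            batch = tokenizer(q["prompt"] + option, return_tensors="pt")
            scores.append(-model(**batch, labels=batch["input_ids"]).loss.item())
        correct += int(max(range(len(scores)), key=scores.__getitem__) == q["answer"])
    return correct / len(questions)

if __name__ == "__main__":
    model_names = ["gpt2", "distilgpt2"]            # stand-ins for the models compared
    task_data = ["Question: 2 + 2 = ?\nAnswer: 4"]  # placeholder task-relevant data
    eval_set = [{"prompt": "Question: 3 + 3 = ?\nAnswer: ",
                 "options": ["6", "7"], "answer": 0}]

    for name in model_names:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name)
        model = finetune_on_task_data(model, tokenizer, task_data)  # same data for every model
        print(f"{name}: benchmark accuracy after task fine-tuning = "
              f"{multiple_choice_accuracy(model, tokenizer, eval_set):.2f}")

Because every model receives the same task-relevant fine-tuning, differences that remain after this step are less likely to reflect differing amounts of training on the test task.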
@article{dominguez-olmedo2025_2407.07890,
  title={Training on the Test Task Confounds Evaluation and Emergence},
  author={Ricardo Dominguez-Olmedo and Florian E. Dorner and Moritz Hardt},
  journal={arXiv preprint arXiv:2407.07890},
  year={2025}
}