On Leakage of Code Generation Evaluation Datasets

10 July 2024
Alexandre Matton
Tom Sherborne
Dennis Aumiller
Elena Tommasone
Milad Alizadeh
Jingyi He
Raymond Ma
Maxime Voisin
Ellen Gilsenan-McMahon
Matthias Gallé
Abstract

In this paper we consider contamination by code generation test sets, in particular their use in modern large language models. We discuss three possible sources of such contamination and show findings supporting each of them: (i) direct data leakage, (ii) indirect data leakage through the use of synthetic data, and (iii) overfitting to evaluation sets during model selection. Key to our findings is a new dataset of 161 prompts with their associated Python solutions, which we release at https://huggingface.co/datasets/CohereForAI/lbpp.
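
The released dataset is hosted on the Hugging Face Hub, so it can be pulled with the standard datasets library. Below is a minimal sketch of loading it: the repository id CohereForAI/lbpp is taken from the URL in the abstract, while the split name is an assumption and may differ from the published configuration.

from datasets import load_dataset

# Load the LBPP dataset released with the paper. The repository id comes
# from the URL in the abstract; the split name "test" is an assumption.
lbpp = load_dataset("CohereForAI/lbpp", split="test")

print(len(lbpp))  # the abstract reports 161 prompts
print(lbpp[0])    # inspect one record (a prompt and its Python solution)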
