ResearchTrend.AI
Recent Advances in Large Langauge Model Benchmarks against Data Contamination: From Static to Dynamic Evaluation

23 February 2025
Simin Chen
Yiming Chen
Zexin Li
Yifan Jiang
Zhongwei Wan
Yixin He
Dezhi Ran
Tianle Gu
Haizhou Li
Tao Xie
Baishakhi Ray
Abstract

Data contamination has received increasing attention in the era of large language models (LLMs) due to their reliance on vast Internet-derived training corpora. To mitigate the risk of potential data contamination, LLM benchmarking has undergone a transformation from static to dynamic evaluation. In this work, we conduct an in-depth analysis of existing benchmarking methods, from static to dynamic, aimed at reducing data contamination risks. We first examine methods that enhance static benchmarks and identify their inherent limitations. We then highlight a critical gap: the lack of standardized criteria for evaluating dynamic benchmarks. Based on this observation, we propose a series of optimal design principles for dynamic benchmarking and analyze the limitations of existing dynamic benchmarks. This survey provides a concise yet comprehensive overview of recent advances in data contamination research, offering valuable insights and a clear guide for future research efforts. We maintain a GitHub repository to continuously collect both static and dynamic benchmarking methods for LLMs. The repository can be found at this link.
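To make the static-versus-dynamic distinction concrete, the following is a minimal illustrative sketch (not taken from the paper) of one common dynamic-benchmarking idea: test items are generated from a template at evaluation time, so the exact question strings are unlikely to appear verbatim in any fixed training corpus. The function names and the arithmetic template are assumptions chosen purely for illustration.

```python
import random


def make_dynamic_item(seed: int) -> dict:
    """Generate a fresh QA item from a parameterized template.

    Because operands are sampled at evaluation time, the exact item
    is unlikely to have leaked into a static training corpus, unlike
    a fixed benchmark question published on the web.
    """
    rng = random.Random(seed)  # seeded for reproducible evaluation runs
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    return {
        "question": f"What is {a} + {b}?",
        "answer": str(a + b),
    }


def evaluate(model_fn, n_items: int = 100, seed: int = 0) -> float:
    """Score a model callable on freshly generated items."""
    items = [make_dynamic_item(seed + i) for i in range(n_items)]
    correct = sum(model_fn(it["question"]) == it["answer"] for it in items)
    return correct / n_items
```

A static benchmark would instead ship the item list as a fixed file; the trade-off the survey examines is that template-based generation avoids verbatim contamination but constrains item diversity and requires its own quality criteria.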

@article{chen2025_2502.17521,
  title={Recent Advances in Large Langauge Model Benchmarks against Data Contamination: From Static to Dynamic Evaluation},
  author={Simin Chen and Yiming Chen and Zexin Li and Yifan Jiang and Zhongwei Wan and Yixin He and Dezhi Ran and Tianle Gu and Haizhou Li and Tao Xie and Baishakhi Ray},
  journal={arXiv preprint arXiv:2502.17521},
  year={2025}
}