
Can LLMs Generate Reliable Test Case Generators? A Study on Competition-Level Programming Problems

Main: 10 pages · Appendix: 26 pages · Bibliography: 1 page · 27 figures · 6 tables
Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in code generation and can tackle complex tasks during inference. However, the extent to which LLMs can be used for code checking or debugging through test case generation remains largely unexplored. We investigate this problem from the perspective of competition-level programming (CP) and propose TCGBench, a Benchmark for (LLM generation of) Test Case Generators. The benchmark comprises two tasks, which study the capabilities of LLMs to (1) generate a valid test case generator for a given CP problem, and further (2) generate a targeted test case generator that exposes bugs in human-written code. Experimental results indicate that while state-of-the-art LLMs can generate valid test case generators in most cases, most LLMs struggle to generate targeted test cases that effectively reveal flaws in human code. In particular, even advanced reasoning models (e.g., o3-mini) fall significantly short of human performance on the task of generating targeted generators. Furthermore, we construct a high-quality, manually curated dataset of instructions for generating targeted generators. Analysis demonstrates that the performance of LLMs can be enhanced with this dataset, through both prompting and fine-tuning.
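To make the benchmark's notion of a "test case generator" concrete, here is a minimal sketch of what such a generator typically looks like in competitive programming. The problem, its constraints, and all function names below are hypothetical illustrations (an array-input problem with bounds on `n` and the values), not taken from TCGBench itself: a generator emits a random input within the stated constraints, and a separate validity check mirrors those constraints.

```python
import random

N_MAX = 100_000      # hypothetical constraint: 1 <= n <= N_MAX
V_MAX = 10**9        # hypothetical constraint: 1 <= a_i <= V_MAX

def generate_case(seed=None):
    """Generate one random test case for a hypothetical CP problem
    whose input is an array of n integers."""
    rng = random.Random(seed)       # seeded for reproducibility
    n = rng.randint(1, N_MAX)
    a = [rng.randint(1, V_MAX) for _ in range(n)]
    return n, a

def format_case(n, a):
    # Conventional CP input format: n on the first line, values on the second.
    return f"{n}\n{' '.join(map(str, a))}\n"

def is_valid(n, a):
    # Validity check mirroring the problem's stated constraints.
    return 1 <= n <= N_MAX and len(a) == n and all(1 <= x <= V_MAX for x in a)
```

A *targeted* generator, in contrast, would bias the sampling toward inputs likely to trigger a specific bug, e.g. always emitting the minimal case `n = 1` to probe an off-by-one error, rather than drawing uniformly.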

@article{cao2025_2506.06821,
  title={Can LLMs Generate Reliable Test Case Generators? A Study on Competition-Level Programming Problems},
  author={Yuhan Cao and Zian Chen and Kun Quan and Ziliang Zhang and Yu Wang and Xiaoning Dong and Yeqi Feng and Guanzhong He and Jingcheng Huang and Jianhao Li and Yixuan Tan and Jiafu Tang and Yilin Tang and Junlei Wu and Qianyu Xiao and Can Zheng and Shouchen Zhou and Yuxiang Zhu and Yiming Huang and Tian Xie and Tianxing He},
  journal={arXiv preprint arXiv:2506.06821},
  year={2025}
}