Can LLMs Generate High-Quality Test Cases for Algorithm Problems? TestCase-Eval: A Systematic Evaluation of Fault Coverage and Exposure

Annual Meeting of the Association for Computational Linguistics (ACL), 2025
13 June 2025
Zheyuan Yang
Zexi Kuang
Xue Xia
Yilun Zhao
ELM
ArXiv: 2506.12278 (abs) · PDF · HTML · HuggingFace (17 upvotes)

Papers citing "Can LLMs Generate High-Quality Test Cases for Algorithm Problems? TestCase-Eval: A Systematic Evaluation of Fault Coverage and Exposure"

2 of 2 citing papers shown
How Many Code and Test Cases Are Enough? Evaluating Test Cases Generation from a Binary-Matrix Perspective
Xianzhen Luo
Jinyang Huang
Wenzhen Zheng
Qingfu Zhu
Mingzheng Xu
Yiheng Xu
YuanTao Fan
L. Qin
Wanxiang Che
09 Oct 2025
Refining Critical Thinking in LLM Code Generation: A Faulty Premise-based Evaluation Framework
Jialin Li
Jinzhe Li
Gengxu Li
Yi-Ju Chang
Yuan Wu
LRM
05 Aug 2025