DevEval: A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories

30 May 2024
Jia Li, Ge Li, Yunfei Zhao, Yongming Li, Huanyu Liu, Hao Zhu, Lecheng Wang, Kaibo Liu, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yuqi Zhu, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, Yongbin Li
ALM

Papers citing "DevEval: A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories"

3 of 3 papers shown
FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation
Wei Li, Xin Zhang, Zhongxin Guo, Shaoguang Mao, Wen Luo, Guangyue Peng, Yangyu Huang, Houfeng Wang, Scarlett Li
09 Mar 2025
CodeIF-Bench: Evaluating Instruction-Following Capabilities of Large Language Models in Interactive Code Generation
Peiding Wang, L. Zhang, Fang Liu, Lin Shi, Minxiao Li, Bo Shen, An Fu
ELM, LRM
05 Mar 2025
Measuring Coding Challenge Competence With APPS
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, ..., Collin Burns, Samir Puranik, Horace He, D. Song, Jacob Steinhardt
ELM, AIMat, ALM
20 May 2021