CodeIF-Bench: Evaluating Instruction-Following Capabilities of Large Language Models in Interactive Code Generation

Abstract

Large Language Models (LLMs) have demonstrated exceptional performance in code generation tasks and have become indispensable programming assistants for developers. However, existing code generation benchmarks primarily assess the functional correctness of code generated by LLMs in single-turn interactions, offering limited insight into their ability to generate code that strictly follows user instructions, especially in multi-turn interaction scenarios. In this paper, we introduce CodeIF-Bench, a benchmark for evaluating LLMs' instruction-following capabilities in interactive code generation. Specifically, CodeIF-Bench incorporates nine types of verifiable instructions aligned with real-world software development requirements, each of which can be independently and objectively validated through specified test cases, facilitating the evaluation of instruction-following capability in multi-turn interactions. We evaluate nine prominent LLMs using CodeIF-Bench, and the experimental results reveal a significant gap between their basic programming capability and their instruction-following capability, a gap that widens as task complexity, context length, and the number of dialogue rounds increase.
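To make the notion of a verifiable instruction concrete, the sketch below shows one way such an instruction could be checked against generated code with a dedicated test, separately from the task's functional tests. This is a minimal illustration and not the CodeIF-Bench evaluation harness; the instruction ("expose a top-level function named parse_config that raises ValueError on malformed input"), the function names, and the test input are all hypothetical.

# Illustrative sketch (not the CodeIF-Bench harness): checking one
# "verifiable instruction" with its own test, independently of the
# task's functional tests. All instruction details below are hypothetical.

import ast


def follows_naming_instruction(generated_code: str, required_name: str) -> bool:
    """Check the instruction 'define a top-level function named <required_name>'."""
    tree = ast.parse(generated_code)
    return any(
        isinstance(node, ast.FunctionDef) and node.name == required_name
        for node in tree.body
    )


def follows_error_handling_instruction(generated_code: str, required_name: str) -> bool:
    """Check the instruction 'raise ValueError on malformed input' by running the code."""
    namespace: dict = {}
    exec(generated_code, namespace)  # acceptable only for this toy, trusted example
    func = namespace[required_name]
    try:
        func("not: [valid")  # malformed input per the hypothetical instruction
    except ValueError:
        return True
    except Exception:
        return False
    return False


if __name__ == "__main__":
    candidate = (
        "def parse_config(text):\n"
        "    if ':' not in text or '[' in text:\n"
        "        raise ValueError('malformed input')\n"
        "    return dict([text.split(':', 1)])\n"
    )
    print(follows_naming_instruction(candidate, "parse_config"))          # True
    print(follows_error_handling_instruction(candidate, "parse_config"))  # True

In this style, each instruction type gets its own independent check, so instruction-following can be scored separately from whether the generated code passes the original problem's tests.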

@article{wang2025_2503.22688,
  title={CodeIF-Bench: Evaluating Instruction-Following Capabilities of Large Language Models in Interactive Code Generation},
  author={Peiding Wang and Li Zhang and Fang Liu and Lin Shi and Minxiao Li and Bo Shen and An Fu},
  journal={arXiv preprint arXiv:2503.22688},
  year={2025}
}