BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions

22 June 2024
Terry Yue Zhuo
Minh Chien Vu
Jenny Chim
Han Hu
Wenhao Yu
Ratnadira Widyasari
Imam Nur Bani Yusuf
Haolan Zhan
Junda He
Indraneil Paul
Simon Brunner
Chen Gong
Thong Hoang
A. Zebaze
Xiaoheng Hong
Wen-Ding Li
Jean Kaddour
Ming Xu
Zhihan Zhang
Prateek Yadav
Naman Jain
Alex Gu
Zhoujun Cheng
Jiawei Liu
Qian Liu
Zijian Wang
Binyuan Hui
David Lo
Daniel Fried
Xiaoning Du
H. D. Vries
Leandro von Werra
Abstract

Task automation has been greatly empowered by recent advances in Large Language Models (LLMs) via Python code, where tasks range from software engineering development to general-purpose reasoning. While current benchmarks have shown that LLMs can solve tasks with programs like human developers, the majority of their evaluations are limited to short, self-contained algorithmic tasks or standalone function calls. Solving challenging and practical tasks requires the capability of utilizing diverse function calls as tools to efficiently implement functionalities like data analysis and web development. In addition, using multiple tools to solve a task requires compositional reasoning and an accurate understanding of complex instructions. Fulfilling both of these characteristics poses a great challenge for LLMs. To assess how well LLMs can solve challenging and practical tasks via programs, we introduce BigCodeBench, a benchmark that challenges LLMs to invoke multiple function calls as tools from 139 libraries across 7 domains for 1,140 fine-grained tasks. To evaluate LLMs rigorously, each task includes an average of 5.6 test cases with an average branch coverage of 99%. In addition, we propose a natural-language-oriented variant of BigCodeBench, BigCodeBench-Instruct, which automatically transforms the original docstrings into short instructions containing only the essential information. Our extensive evaluation of 60 LLMs shows that LLMs are not yet capable of following complex instructions to use function calls precisely, with scores of up to 60%, significantly lower than the human performance of 97%. The results underscore the need for further advancements in this area.
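To make the task format concrete, below is a small, hypothetical sketch of what a BigCodeBench-style programming task could look like: a natural-language docstring that requires composing calls from several libraries (here `re`, `collections`, and `statistics`), paired with unit tests that check the solution's behavior. The function name `task_func`, the chosen libraries, and the tests are illustrative assumptions, not an actual task from the benchmark.

```python
# Hypothetical sketch of a BigCodeBench-style task: the prompt is a docstring
# that requires composing calls across multiple libraries, and correctness is
# checked by unit tests (real benchmark tasks target high branch coverage).
import re
import unittest
from collections import Counter
from statistics import mean


def task_func(text):
    """
    Tokenize the input text into lowercase words, count how often each word
    occurs, and return a tuple (most_common_word, average_word_length).

    Requirements: re, collections.Counter, statistics.mean

    Example: task_func("The cat sat and the dog ran") -> ('the', 3)
    """
    words = re.findall(r"[a-z]+", text.lower())     # tokenize with re
    counts = Counter(words)                         # count occurrences with collections
    most_common_word = counts.most_common(1)[0][0]  # highest-frequency word
    avg_len = mean(len(w) for w in words)           # aggregate with statistics
    return most_common_word, avg_len


class TestTaskFunc(unittest.TestCase):
    # Benchmark tasks ship with test cases exercising many branches;
    # these two checks are only a minimal stand-in for that idea.
    def test_repeated_word(self):
        word, avg = task_func("The cat sat and the dog ran")
        self.assertEqual(word, "the")
        self.assertAlmostEqual(avg, 3.0)

    def test_single_word(self):
        self.assertEqual(task_func("hello"), ("hello", 5))


if __name__ == "__main__":
    unittest.main()
```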
