
LongCodeBench: Evaluating Coding LLMs at 1M Context Windows

Abstract

Context lengths for models have grown rapidly, from thousands to millions of tokens in just a few years. The extreme context sizes of modern long-context models have made it difficult to construct realistic long-context benchmarks, not only because of the cost of collecting million-token tasks but also because of the difficulty of identifying realistic scenarios that require such large contexts. We identify code comprehension and repair as a natural testbed and challenge task for long-context models and introduce LongCodeBench (LCB), a benchmark to test LLM coding abilities in long-context scenarios. Our benchmark tests both the comprehension and repair capabilities of long-context language models (LCLMs) in realistic and important settings, drawing from real-world GitHub issues to construct QA (LongCodeQA) and bug-fixing (LongSWE-Bench) tasks. We carefully stratify the complexity of our benchmark, enabling us to evaluate models across different scales, ranging from Qwen2.5 14B Instruct to Google's flagship Gemini model. We find that long-context performance remains a weakness for all models, with drops such as from 29% to 3% for Claude 3.5 Sonnet, or from 70.2% to 40% for Qwen2.5.
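As a rough illustration of the kind of construction the abstract describes, the sketch below assembles a LongCodeQA-style prompt by concatenating a repository's source files into one long context and appending an issue-derived question. The function name, the Python-only file filter, and the 4-characters-per-token budget heuristic are illustrative assumptions, not the benchmark's actual pipeline.

```python
# Hypothetical sketch: build a long-context QA prompt from a repository
# plus a question drawn from a GitHub issue. All names and heuristics
# here are assumptions for illustration only.
from pathlib import Path


def build_longcodeqa_prompt(repo_dir: str, question: str, max_tokens: int = 1_000_000) -> str:
    """Concatenate source files from repo_dir and append the issue question."""
    budget_chars = max_tokens * 4  # rough heuristic: ~4 characters per token
    chunks, used = [], 0
    for path in sorted(Path(repo_dir).rglob("*.py")):
        text = path.read_text(errors="ignore")
        block = f"### File: {path}\n{text}\n"
        if used + len(block) > budget_chars:
            break  # stop once the approximate context budget is filled
        chunks.append(block)
        used += len(block)
    chunks.append(f"### Question (from a GitHub issue)\n{question}\nAnswer:")
    return "".join(chunks)


if __name__ == "__main__":
    prompt = build_longcodeqa_prompt(".", "Why does the parser fail on empty input?")
    print(f"Prompt length: {len(prompt)} characters")
```

A real harness would tokenize with the target model's tokenizer, include non-Python files, and stratify examples by repository size to hit specific context-length buckets, as the abstract's mention of complexity stratification suggests.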

@article{rando2025_2505.07897,
  title={LongCodeBench: Evaluating Coding LLMs at 1M Context Windows},
  author={Stefano Rando and Luca Romani and Alessio Sampieri and Yuta Kyuragi and Luca Franco and Fabio Galasso and Tatsunori Hashimoto and John Yang},
  journal={arXiv preprint arXiv:2505.07897},
  year={2025}
}