Evaluating Large Language Models in Vulnerability Detection Under Variable Context Windows

Abstract

This study examines how the tokenized length of Java code affects the accuracy and explicitness of ten major LLMs in vulnerability detection. Using chi-square tests against known ground truth, we found inconsistent behavior across models: some, such as GPT-4, Mistral, and Mixtral, remained robust to input length, while others showed a statistically significant association between tokenized length and performance. We recommend that future LLM development focus on minimizing the influence of input length to improve vulnerability detection. Additionally, preprocessing techniques that reduce token count while preserving code structure could enhance LLM accuracy and explicitness on these tasks.
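The chi-square test of independence mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual analysis: the contingency counts below are hypothetical, binning detection outcomes (correct vs. incorrect against ground truth) by tokenized input length.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table (counts are illustrative only):
# rows = tokenized-length bins, columns = (correct, incorrect) verdicts
observed = [
    [40, 10],  # short inputs
    [35, 15],  # medium inputs
    [20, 30],  # long inputs
]

# Test whether detection correctness is independent of input length.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

# A small p-value (e.g. < 0.05) would indicate a significant association
# between tokenized length and detection performance for this model.
```

A model that is robust to context length would yield a non-significant result here, while a length-sensitive model would not.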

@article{lin2025_2502.00064,
  title={Evaluating Large Language Models in Vulnerability Detection Under Variable Context Windows},
  author={Jie Lin and David Mohaisen},
  journal={arXiv preprint arXiv:2502.00064},
  year={2025}
}