Evaluating Large Language Models in Vulnerability Detection Under Variable Context Windows
International Conference on Machine Learning and Applications (ICMLA), 2024
Main: 3 pages · Bibliography: 2 pages · 2 tables
Abstract
This study examines how the tokenized length of Java code affects the accuracy and explicitness of ten major LLMs in vulnerability detection. Using chi-square tests against known ground-truth labels, we found inconsistent behavior across models: some, such as GPT-4, Mistral, and Mixtral, were robust to input length, while others showed a statistically significant association between tokenized length and performance. We recommend that future LLM development focus on minimizing the influence of input length to improve vulnerability detection. Additionally, preprocessing techniques that reduce token count while preserving code structure could enhance LLM accuracy and explicitness in these tasks.
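The chi-square test of independence mentioned above can be sketched as follows. This is not the authors' code: the contingency table below (token-length bins vs. detection correctness) uses illustrative placeholder counts, not data from the paper.

```python
def chi_square_statistic(observed):
    """Return (chi2, dof) for a 2-D contingency table of counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, count in enumerate(row):
            # Expected count under the independence hypothesis
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (count - expected) ** 2 / expected
    dof = (len(observed) - 1) * (len(observed[0]) - 1)
    return chi2, dof

# Rows: token-length bins (short, medium, long);
# columns: (correct, incorrect). Placeholder counts only.
observed = [
    [40, 10],
    [35, 15],
    [20, 30],
]

chi2, dof = chi_square_statistic(observed)
# For dof=2, the critical value at alpha=0.05 is about 5.99; a larger
# statistic suggests accuracy is not independent of input length.
print(f"chi2 = {chi2:.2f}, dof = {dof}")
```

In practice one would compare the statistic to the chi-square distribution (e.g. via `scipy.stats.chi2_contingency`) to obtain a p-value per model.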
