Towards Reliable LLM-Driven Fuzz Testing: Vision and Road Ahead
Fuzz testing is a crucial component of software security assessment, yet its effectiveness heavily relies on valid fuzz drivers and diverse seed inputs. Recent advancements in Large Language Models (LLMs) offer transformative potential for automating fuzz testing (LLM4Fuzz), particularly in generating drivers and seeds. However, current LLM4Fuzz solutions face critical reliability challenges, including low driver validity rates and seed quality trade-offs, which hinder their practical adoption. This paper examines the reliability bottlenecks of LLM-driven fuzzing and explores potential research directions to address these limitations. It begins with an overview of the current development of LLM4SE and emphasizes the necessity of developing reliable LLM4Fuzz solutions. Following this, the paper articulates a vision in which reliable LLM4Fuzz transforms the landscape of software testing and security for industry and software development practitioners while improving economic accessibility. It then outlines a road ahead for future research, identifying key challenges and offering specific suggestions for researchers to consider. This work strives to spark innovation in the field, positioning reliable LLM4Fuzz as a fundamental component of modern software testing.
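To ground the terminology, a fuzz driver is the harness code that feeds fuzzer-generated bytes into a target API, and driver "validity" hinges on compiling correctly and respecting the API's usage contract. The sketch below is a minimal LibFuzzer-style driver for a hypothetical configuration-parsing API (cfg_parse / cfg_free are illustrative names, not from the paper); it shows the kind of harness an LLM4Fuzz pipeline is expected to produce reliably.

```c
/* Minimal LibFuzzer-style fuzz driver (illustrative sketch).
 * The target API (cfg_parse / cfg_free) is hypothetical; a real
 * LLM4Fuzz pipeline would generate such a harness for an actual library. */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical target API exercised by the driver. */
typedef struct cfg cfg_t;
extern cfg_t *cfg_parse(const char *buf, size_t len);
extern void cfg_free(cfg_t *c);

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  /* Copy into a NUL-terminated buffer so the target's input precondition
   * holds; violating such API preconditions is a common reason
   * LLM-generated drivers are counted as invalid. */
  char *buf = malloc(size + 1);
  if (buf == NULL)
    return 0;
  memcpy(buf, data, size);
  buf[size] = '\0';

  cfg_t *c = cfg_parse(buf, size);
  if (c != NULL)
    cfg_free(c);  /* respect the API's cleanup contract */

  free(buf);
  return 0;  /* non-crashing inputs are simply discarded by the fuzzer */
}
```

Seed quality, the other reliability concern the abstract raises, then determines how much of the target's input grammar such a driver can actually reach during fuzzing.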
@article{cheng2025_2503.00795,
  title={Towards Reliable LLM-Driven Fuzz Testing: Vision and Road Ahead},
  author={Yiran Cheng and Hong Jin Kang and Lwin Khin Shar and Chaopeng Dong and Zhiqiang Shi and Shichao Lv and Limin Sun},
  journal={arXiv preprint arXiv:2503.00795},
  year={2025}
}