Who Wrote This? The Key to Zero-Shot LLM-Generated Text Detection Is GECScore

The efficacy of detectors for texts generated by large language models (LLMs) substantially depends on the availability of large-scale training data. However, white-box zero-shot detectors, which require no such data, are limited by the accessibility of the source model of the LLM-generated text. In this paper, we propose a simple yet effective black-box zero-shot detection approach based on the observation that, from the perspective of LLMs, human-written texts typically contain more grammatical errors than LLM-generated texts. This approach involves calculating the Grammar Error Correction Score (GECScore) for a given text to differentiate between human-written and LLM-generated text. Experimental results show that our method outperforms current state-of-the-art (SOTA) zero-shot and supervised methods, achieving an average AUROC of 98.62% across the XSum and Writing Prompts datasets. Additionally, our approach demonstrates strong reliability in the wild, exhibiting robust generalization and resistance to paraphrasing attacks. Data and code are available at: this https URL.
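The detection idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `correct` argument stands in for a grammar-error-correction system (in practice, an LLM prompted to fix grammar), and `difflib` string similarity is an assumed stand-in for the paper's actual scoring; the names `gec_score`, `classify`, and the threshold value are hypothetical.

```python
import difflib

def gec_score(text: str, correct) -> float:
    """Similarity between a text and its grammar-corrected version.

    `correct` is a caller-supplied grammar-error-correction function
    (hypothetical stand-in for the paper's GEC model). A score near 1.0
    means few corrections were needed; per the paper's observation, this
    is characteristic of LLM-generated text, while human-written text
    tends to require more corrections and thus scores lower.
    """
    corrected = correct(text)
    return difflib.SequenceMatcher(None, text, corrected).ratio()

def classify(text: str, correct, threshold: float = 0.9) -> str:
    # Threshold is illustrative; in practice it would be chosen on
    # held-out data or via a score distribution over a reference corpus.
    return "llm" if gec_score(text, correct) >= threshold else "human"
```

Usage with a toy corrector that only fixes the misspelling "teh": a clean sentence is left unchanged (score 1.0), while a sentence with errors scores lower.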
@article{wu2025_2405.04286,
  title={Who Wrote This? The Key to Zero-Shot LLM-Generated Text Detection Is GECScore},
  author={Junchao Wu and Runzhe Zhan and Derek F. Wong and Shu Yang and Xuebo Liu and Lidia S. Chao and Min Zhang},
  journal={arXiv preprint arXiv:2405.04286},
  year={2025}
}