Short-PHD: Detecting Short LLM-generated Text with Topological Data Analysis After Off-topic Content Insertion

Abstract

The malicious use of large language models (LLMs) has motivated the detection of LLM-generated text. Previous work in topological data analysis shows that the persistent homology dimension (PHD) of text embeddings can serve as a more robust and reliable detection score than other zero-shot methods. However, effectively detecting short LLM-generated texts remains a challenge. This paper presents Short-PHD, a zero-shot LLM-generated text detection method tailored for short texts. Short-PHD stabilizes the PHD estimate of the previous method for short texts by inserting off-topic content before the given input text, and identifies LLM-generated text based on an established detection threshold. Experimental results on both public and generated datasets demonstrate that Short-PHD outperforms existing zero-shot methods in short LLM-generated text detection. Implementation code is available online.
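To make the described pipeline concrete, the sketch below shows one plausible rendering of it under several assumptions: per-token embeddings from "roberta-base", an MST-based PH0-dimension estimator in the spirit of the earlier PHD work, and placeholder values for the off-topic prefix and the detection threshold. The helper names (estimate_phd, short_phd_score) are illustrative, not the authors' released implementation.

import numpy as np
import torch
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from transformers import AutoTokenizer, AutoModel

def mst_edge_sum(points, alpha=1.0):
    # Sum of alpha-powered edge lengths of the Euclidean minimum spanning tree.
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists).toarray()
    return np.power(mst[mst > 0], alpha).sum()

def estimate_phd(points, alpha=1.0, n_sizes=8, n_trials=5, seed=0):
    # PH0 dimension via the scaling law E_alpha(n) ~ n^((d - alpha) / d):
    # fit log E_alpha against log n over random subsamples, then d = alpha / (1 - slope).
    rng = np.random.default_rng(seed)
    n = len(points)
    sizes = np.linspace(max(n // 2, 3), n, n_sizes, dtype=int)
    log_n, log_e = [], []
    for m in sizes:
        vals = [mst_edge_sum(points[rng.choice(n, size=m, replace=False)], alpha)
                for _ in range(n_trials)]
        log_n.append(np.log(m))
        log_e.append(np.log(np.mean(vals)))
    slope = np.polyfit(log_n, log_e, 1)[0]
    return alpha / (1.0 - slope)

def token_point_cloud(text, tokenizer, model):
    # One point per token: the encoder's last-layer contextual embeddings.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    return hidden.numpy()

def short_phd_score(short_text, off_topic_prefix, tokenizer, model):
    # Short-PHD idea: prepend off-topic content to enlarge and stabilize the
    # token point cloud, then estimate PHD on the combined text.
    combined = off_topic_prefix + " " + short_text
    return estimate_phd(token_point_cloud(combined, tokenizer, model))

if __name__ == "__main__":
    # The encoder, the off-topic prefix, and the threshold are placeholders;
    # in the prior PHD work, generated text tends to show lower intrinsic dimension.
    tok = AutoTokenizer.from_pretrained("roberta-base")
    enc = AutoModel.from_pretrained("roberta-base")
    prefix = "The 1889 Exposition Universelle was a world's fair held in Paris."
    score = short_phd_score("Your short input text goes here.", prefix, tok, enc)
    threshold = 8.0  # tuned on held-out data in practice
    print("PHD:", score, "->", "LLM-generated" if score < threshold else "human")

The subsample-and-fit loop is the standard way PH0 dimension is estimated from minimum spanning tree weights; the off-topic prefix simply gives the estimator enough points to produce a stable value when the input itself is short.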

@article{wei2025_2504.02873,
  title={Short-PHD: Detecting Short LLM-generated Text with Topological Data Analysis After Off-topic Content Insertion},
  author={Dongjun Wei and Minjia Mao and Xiao Fang and Michael Chau},
  journal={arXiv preprint arXiv:2504.02873},
  year={2025}
}