
PlagBench: Exploring the Duality of Large Language Models in Plagiarism Generation and Detection

Abstract

Recent studies have raised concerns about the potential threats large language models (LLMs) pose to academic integrity and copyright protection. Yet, these investigations have focused predominantly on literal copies of original texts, and how LLMs can facilitate the detection of LLM-generated plagiarism remains largely unexplored. To address these gaps, we introduce PlagBench, a dataset of 46.5K synthetic text pairs that represent three major types of plagiarism: verbatim copying, paraphrasing, and summarization. These samples are generated by three advanced LLMs. We rigorously validate the quality of PlagBench through a combination of fine-grained automatic evaluation and human annotation. We then use this dataset for two purposes: (1) to examine LLMs' ability to transform original content into accurate paraphrases and summaries, and (2) to evaluate the plagiarism detection performance of five modern LLMs alongside three specialized plagiarism checkers. Our results show that GPT-3.5 Turbo can produce high-quality paraphrases and summaries without significantly increasing text complexity, compared to GPT-4 Turbo. In detection, however, GPT-4 outperforms the other LLMs and commercial detection tools by 20%, highlighting the evolving capabilities of LLMs not only in content generation but also in plagiarism detection. Data and source code are available at this https URL.
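The abstract describes a generate-then-detect setup: one LLM produces a plagiarized variant of a source text, and another LLM is asked to judge the pair. The following is a minimal sketch of that idea, not the paper's actual pipeline; the model names, prompts, and label set are illustrative assumptions, and it uses the standard OpenAI Python client.

# Sketch of the generate-then-detect duality described in the abstract.
# NOTE: prompts, model choices, and labels are illustrative assumptions,
# not the prompts or protocol used in the PlagBench paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_paraphrase(source_text: str, model: str = "gpt-3.5-turbo") -> str:
    """Create a synthetic paraphrase-plagiarism sample from a source text."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Paraphrase the user's text while preserving its meaning."},
            {"role": "user", "content": source_text},
        ],
    )
    return resp.choices[0].message.content

def detect_plagiarism(original: str, candidate: str, model: str = "gpt-4") -> str:
    """Ask an LLM to label the relation between two texts (hypothetical label set)."""
    prompt = (
        "Does TEXT B plagiarize TEXT A? Answer with one label: "
        "verbatim, paraphrase, summary, or none.\n\n"
        f"TEXT A:\n{original}\n\nTEXT B:\n{candidate}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    source = "Large language models raise new challenges for academic integrity."
    candidate = generate_paraphrase(source)
    print("Paraphrase:", candidate)
    print("Detector label:", detect_plagiarism(source, candidate))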

@article{lee2025_2406.16288,
  title={PlagBench: Exploring the Duality of Large Language Models in Plagiarism Generation and Detection},
  author={Jooyoung Lee and Toshini Agrawal and Adaku Uchendu and Thai Le and Jinghui Chen and Dongwon Lee},
  journal={arXiv preprint arXiv:2406.16288},
  year={2025}
}