Large Language Models Penetration in Scholarly Writing and Peer Review

Abstract

While the widespread use of Large Language Models (LLMs) brings convenience, it also raises concerns about the credibility of academic research and scholarly processes. To better understand these dynamics, we evaluate the penetration of LLMs across academic workflows from multiple perspectives and dimensions, providing compelling evidence of their growing influence. We propose a framework with two components: \texttt{ScholarLens}, a curated dataset of human- and LLM-generated content across scholarly writing and peer review for multi-perspective evaluation, and \texttt{LLMetrica}, a tool for assessing LLM penetration using rule-based metrics and model-based detectors for multi-dimensional evaluation. Our experiments demonstrate the effectiveness of \texttt{LLMetrica}, revealing the increasing role of LLMs in scholarly processes. These findings emphasize the need for transparency, accountability, and ethical practices in LLM usage to maintain academic credibility.
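To make the rule-based side of this concrete, here is a minimal illustrative sketch of what one such penetration signal might look like: a per-1,000-token rate of words that several prevalence studies have reported as overrepresented in LLM-generated text. The marker list, scoring, and function name are assumptions for illustration only; they are not \texttt{LLMetrica}'s actual rules, which would be calibrated on labeled human/LLM pairs such as those \texttt{ScholarLens} is described as providing.

```python
# Illustrative sketch only: LLMetrica's real rule-based metrics are not
# specified in the abstract. This toy metric counts "LLM-favored" marker
# words, a common signal in studies of LLM-text prevalence.
from collections import Counter
import re

# Hypothetical marker list; a real metric would be calibrated on a
# labeled corpus of human- and LLM-generated scholarly text.
MARKER_WORDS = {"delve", "underscore", "pivotal", "intricate", "showcase"}

def marker_rate(text: str) -> float:
    """Return marker-word occurrences per 1,000 tokens (0.0 for empty text)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in MARKER_WORDS)
    return 1000.0 * hits / len(tokens)

if __name__ == "__main__":
    review = "We delve into the pivotal role of intricate evaluation."
    print(f"marker rate: {marker_rate(review):.1f} per 1k tokens")
```

In practice such rule-based scores would be combined with model-based detectors, as the abstract indicates, rather than used in isolation.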

@article{zhou2025_2502.11193,
  title={Large Language Models Penetration in Scholarly Writing and Peer Review},
  author={Li Zhou and Ruijie Zhang and Xunlian Dai and Daniel Hershcovich and Haizhou Li},
  journal={arXiv preprint arXiv:2502.11193},
  year={2025}
}