Unmasking the Imposters: In-Domain Detection of Human vs. Machine-Generated Tweets

International Conference on Computational Linguistics (COLING), 2024
Main: 8 pages · Bibliography: 4 pages · Appendix: 6 pages · 11 figures · 13 tables
Abstract

The rapid development of large language models (LLMs) has significantly improved the generation of fluent and convincing text, raising concerns about their misuse on social media platforms. We present a methodology using Twitter datasets to examine the generative capabilities of four LLMs: Llama 3, Mistral, Qwen2, and GPT-4o. We evaluate the 7B- and 8B-parameter instruction-tuned models of the three open-source LLMs and assess the impact of further fine-tuning and of "uncensored" variants. Our findings show that "uncensored" models with additional in-domain fine-tuning dramatically reduce the effectiveness of automated detection methods. This study addresses a gap by exploring smaller open-source models and the effects of "uncensoring," providing insights into how fine-tuning and content moderation influence machine-generated text detection.
