Counterspeech is a key strategy against harmful online content, but scaling expert-driven efforts is challenging. Large Language Models (LLMs) present a potential solution, though their use in countering conspiracy theories is under-researched. Unlike for hate speech, no datasets exist that pair conspiracy theory comments with expert-crafted counterspeech. We address this gap by evaluating the ability of GPT-4o, Llama 3, and Mistral to effectively apply counterspeech strategies derived from psychological research, provided through structured prompts. Our results show that the models often generate generic, repetitive, or superficial responses. Additionally, they over-acknowledge fear and frequently hallucinate facts, sources, or figures, making their prompt-based use in practical applications problematic.
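To illustrate the prompt-based setup the abstract describes, the following is a minimal sketch of how a psychology-derived counterspeech strategy might be supplied to GPT-4o through a structured prompt via the OpenAI Python client. The strategy text, example comment, and parameter choices are hypothetical placeholders for illustration, not the prompts or settings used in the paper.

# Minimal sketch: prompting an LLM with a structured counterspeech strategy.
# The strategy description and example comment are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical strategy in the spirit of psychological debunking research:
# brief empathetic acknowledgment followed by fact-based refutation.
STRATEGY = (
    "Acknowledge the commenter's underlying concern in one sentence, "
    "then refute the core false claim with verifiable facts, and "
    "avoid repeating the conspiracy narrative more than necessary."
)

conspiracy_comment = (
    "They put chips in the vaccines so the government can track everyone."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You write counterspeech to conspiracy-theory "
                       f"comments. Apply this strategy: {STRATEGY}",
        },
        {"role": "user", "content": conspiracy_comment},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)

Evaluating outputs from such prompts for genericness, repetition, and hallucinated facts or sources is the kind of analysis the paper reports across GPT-4o, Llama 3, and Mistral.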
@article{lisker2025_2504.16604,
  title={Debunking with Dialogue? Exploring AI-Generated Counterspeech to Challenge Conspiracy Theories},
  author={Mareike Lisker and Christina Gottschalk and Helena Mihaljević},
  journal={arXiv preprint arXiv:2504.16604},
  year={2025}
}