Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection

The spread of fake news harms individuals and presents a critical social challenge that must be addressed. Although numerous algorithmic features and insights have been developed for detecting fake news, many of them can be manipulated through style-conversion attacks, especially with the emergence of advanced language models, making fake news more difficult to distinguish from genuine news. This study proposes adversarial style augmentation, AdStyle, designed to train a fake news detector that remains robust against various style-conversion attacks. The key mechanism is the careful use of LLMs to automatically generate a diverse yet coherent range of style-conversion attack prompts, steering prompt generation toward those that are especially challenging for the detector. Experiments indicate that our augmentation strategy significantly improves robustness and detection performance when evaluated on fake news benchmark datasets.
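To make the mechanism concrete, the sketch below shows one plausible reading of the adversarial augmentation loop: an LLM proposes candidate style-conversion attack prompts, each prompt is scored by how much it shifts the current detector's predictions on a small probe set, and the hardest prompt is used to rewrite the training articles. The functions `generate_text` (an LLM completion call) and `detector_prob_fake` (a trained detector returning the probability an article is fake) are hypothetical stand-ins, not the authors' actual implementation.

```python
# A minimal sketch of adversarial style augmentation, assuming hypothetical
# `generate_text` and `detector_prob_fake` callables; not the paper's code.
import random
from typing import Callable, List, Tuple

def adstyle_augment(
    train_set: List[Tuple[str, int]],            # (article text, label: 1 = fake)
    generate_text: Callable[[str], str],         # LLM completion (assumed)
    detector_prob_fake: Callable[[str], float],  # current detector (assumed)
    n_candidate_prompts: int = 8,
    n_probe: int = 16,
) -> List[Tuple[str, int]]:
    """Return style-converted copies of the training articles, rewritten
    with the attack prompt that currently confuses the detector most."""
    # 1) Ask the LLM for a diverse pool of style-conversion attack prompts.
    meta_prompt = (
        "Propose one short instruction that rewrites a news article in a "
        "different writing style while preserving every factual claim."
    )
    candidates = [generate_text(meta_prompt) for _ in range(n_candidate_prompts)]

    # 2) Score each prompt by how much it degrades the detector on a probe
    #    subset: fake articles should look less fake, real ones more fake.
    probe = random.sample(train_set, min(n_probe, len(train_set)))

    def difficulty(prompt: str) -> float:
        shift = 0.0
        for text, label in probe:
            rewritten = generate_text(f"{prompt}\n\nArticle:\n{text}")
            p = detector_prob_fake(rewritten)
            shift += (1.0 - p) if label == 1 else p
        return shift / len(probe)

    hardest = max(candidates, key=difficulty)

    # 3) Augment the full training set with the hardest style conversion;
    #    labels are kept, since a style change should not alter veracity.
    return [(generate_text(f"{hardest}\n\nArticle:\n{text}"), label)
            for text, label in train_set]
```

Retraining the detector on the union of the original and augmented examples, and repeating the loop, would then exercise it against progressively harder style-conversion attacks.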
@article{park2025_2406.11260,
  title={Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection},
  author={Sungwon Park and Sungwon Han and Xing Xie and Jae-Gil Lee and Meeyoung Cha},
  journal={arXiv preprint arXiv:2406.11260},
  year={2025}
}