SINCon: Mitigate LLM-Generated Malicious Message Injection Attack for Rumor Detection

With the rapid evolution of large language models (LLMs), state-of-the-art rumor detection systems, particularly those based on Message Propagation Trees (MPTs), which model a conversation as a tree rooted at the source post with replies as its descendants, face growing threats from adversarial attacks that use LLMs to generate and inject malicious messages. Existing attacks rest on the assumption that different nodes exert varying degrees of influence on the prediction: they identify nodes with high predictive influence as important and target them. If the model instead treats the predictive influence of nodes more uniformly, attackers find it far harder to single out high-influence nodes. In this paper, we propose Similarizing the predictive Influence of Nodes with Contrastive learning (SINCon), a defense mechanism that encourages the model to learn graph representations in which nodes of varying importance exert a more uniform influence on predictions. Extensive experiments on the Twitter and Weibo datasets demonstrate that SINCon not only preserves high classification accuracy on clean data but also significantly improves robustness against LLM-driven message injection attacks.
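The core idea, nodes with very uneven predictive influence make easy attack targets, can be illustrated with a small sketch. The snippet below is an assumption-laden toy, not SINCon's actual method: it measures each node's influence on a graph-level score via leave-one-out mean pooling (a stand-in for whatever influence measure and readout the paper uses) and defines a uniformity penalty, the variance of those influences, that a defense could drive toward zero so no single node dominates the prediction.

```python
import numpy as np

def node_influence(X, w):
    """Leave-one-out influence of each node on a graph-level score.

    X: (n, d) array of node features; w: (d,) readout weights.
    The graph score is mean-pooled features passed through a linear
    readout; a node's influence is how much the score shifts when
    that node is removed. (Illustrative proxy only -- not the
    influence measure defined in the paper.)
    """
    n = X.shape[0]
    full_score = X.mean(axis=0) @ w
    influence = np.empty(n)
    for i in range(n):
        rest_score = np.delete(X, i, axis=0).mean(axis=0) @ w
        influence[i] = abs(full_score - rest_score)
    return influence

def uniformity_penalty(influence):
    """Variance of node influences; minimizing this as an auxiliary
    loss term would push all nodes toward equal predictive influence,
    leaving attackers no obvious high-influence node to target."""
    return float(np.var(influence))

# Toy MPT with one node (row 0) that dominates the score.
X = np.array([[5.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
w = np.array([1.0, 1.0])
print(uniformity_penalty(node_influence(X, w)))  # large: uneven influence

# After "similarizing", all nodes contribute alike and the penalty vanishes.
X_uniform = np.full((4, 2), 2.0)
print(uniformity_penalty(node_influence(X_uniform, w)))  # 0.0
```

In the paper this uniformity objective is realized through contrastive learning on graph representations rather than an explicit variance penalty; the sketch only conveys why uniform influence blunts importance-targeted injection attacks.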
@article{zhang2025_2504.07135,
  title={SINCon: Mitigate LLM-Generated Malicious Message Injection Attack for Rumor Detection},
  author={Mingqing Zhang and Qiang Liu and Xiang Tao and Shu Wu and Liang Wang},
  journal={arXiv preprint arXiv:2504.07135},
  year={2025}
}