Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models

21 May 2025
Jiaying Wu
Fanxiao Li
Min-Yen Kan
Bryan Hooi
Abstract

The real-world impact of misinformation stems from the underlying misleading narratives that creators seek to convey. As such, interpreting misleading creator intent is essential for multimodal misinformation detection (MMD) systems aimed at effective information governance. In this paper, we introduce an automated framework that simulates real-world multimodal news creation by explicitly modeling creator intent through two components: the desired influence and the execution plan. Using this framework, we construct DeceptionDecoded, a large-scale benchmark comprising 12,000 image-caption pairs aligned with trustworthy reference articles. The dataset captures both misleading and non-misleading intents and spans manipulations across visual and textual modalities. We conduct a comprehensive evaluation of 14 state-of-the-art vision-language models (VLMs) on three intent-centric tasks: (1) misleading intent detection, (2) misleading source attribution, and (3) creator desire inference. Despite recent advances, we observe that current VLMs fall short in recognizing misleading intent, often relying on spurious cues such as superficial cross-modal consistency, stylistic signals, and heuristic authenticity hints. Our findings highlight the pressing need for intent-aware modeling in MMD and open new directions for developing systems capable of deeper reasoning about multimodal misinformation.
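To make the intent-centric tasks concrete, the sketch below outlines how the first task, misleading intent detection, might be posed to a VLM over DeceptionDecoded-style records. The field names, prompt wording, and the `query_vlm` stub are illustrative assumptions for this sketch, not the authors' released data schema or evaluation interface.

```python
# Illustrative sketch only: record fields and query_vlm are hypothetical,
# not the official DeceptionDecoded interface.
from dataclasses import dataclass
from typing import Literal


@dataclass
class NewsItem:
    image_path: str        # news image
    caption: str           # accompanying caption
    reference_article: str # trustworthy reference article text
    intent_label: Literal["misleading", "non-misleading"]  # ground-truth creator intent


def query_vlm(image_path: str, prompt: str) -> str:
    """Placeholder for a call to any vision-language model; plug in a concrete client here."""
    raise NotImplementedError


def detect_misleading_intent(item: NewsItem) -> str:
    """Task 1 (misleading intent detection): ask the VLM to judge creator intent
    given the image-caption pair and the trustworthy reference article."""
    prompt = (
        "You are given a news image, its caption, and a trustworthy reference article.\n"
        f"Caption: {item.caption}\n"
        f"Reference article: {item.reference_article}\n"
        "Does the creator of this image-caption pair intend to mislead readers? "
        "Answer with exactly one word: 'misleading' or 'non-misleading'."
    )
    return query_vlm(item.image_path, prompt).strip().lower()


def accuracy(items: list[NewsItem]) -> float:
    """Fraction of items on which the VLM's verdict matches the ground-truth intent label."""
    correct = sum(detect_misleading_intent(it) == it.intent_label for it in items)
    return correct / len(items)
```

The other two tasks, misleading source attribution and creator desire inference, would follow the same pattern with different prompts and label spaces (e.g. attributing the manipulation to the visual or textual modality, or asking for the creator's desired influence in free text).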

View on arXiv
@article{wu2025_2505.15489,
  title={Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models},
  author={Jiaying Wu and Fanxiao Li and Min-Yen Kan and Bryan Hooi},
  journal={arXiv preprint arXiv:2505.15489},
  year={2025}
}