MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs

Current multimodal misinformation detection (MMD) methods often assume a single forgery source and type per sample, an assumption that breaks down in real-world scenarios where multiple forgery sources coexist. The lack of a benchmark for mixed-source misinformation has hindered progress in this field. To address this, we introduce MMFakeBench, the first comprehensive benchmark for mixed-source MMD. MMFakeBench covers 3 critical sources, textual veracity distortion, visual veracity distortion, and cross-modal consistency distortion, along with 12 sub-categories of misinformation forgery types. We further conduct an extensive evaluation of 6 prevalent detection methods and 15 Large Vision-Language Models (LVLMs) on MMFakeBench under a zero-shot setting. The results indicate that current methods struggle in this challenging and realistic mixed-source MMD setting. We also propose MMD-Agent, a novel approach that integrates the reasoning, action, and tool-use capabilities of LVLM agents, significantly improving accuracy and generalization. We believe this study will catalyze future research into more realistic mixed-source multimodal misinformation and provide a fair evaluation of misinformation detection methods.
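The abstract does not specify how MMD-Agent sequences its reasoning, actions, and tool calls, so the following is a minimal sketch only, assuming a staged pipeline that checks each of the three distortion sources named above in turn. All identifiers here (`Sample`, `llm_judge`, `web_search_tool`, `detect`) are hypothetical placeholders, not the authors' implementation.

```python
from dataclasses import dataclass

# The three mixed-source distortion categories named in the abstract.
CATEGORIES = [
    "textual_veracity_distortion",
    "visual_veracity_distortion",
    "cross_modal_consistency_distortion",
]

@dataclass
class Sample:
    image_path: str
    claim_text: str

def llm_judge(prompt: str) -> str:
    """Hypothetical stub for an LVLM call; returns 'real' or 'fake'.
    In practice this would query a real model API."""
    return "real"

def web_search_tool(query: str) -> str:
    """Hypothetical evidence-retrieval tool; returns retrieved snippets."""
    return ""

def detect(sample: Sample) -> str:
    # Stage 1: textual veracity -- cross-check the claim against retrieved evidence.
    evidence = web_search_tool(sample.claim_text)
    if llm_judge(f"Claim: {sample.claim_text}\nEvidence: {evidence}\n"
                 "Is the claim factual? Answer 'real' or 'fake'.") == "fake":
        return "fake (textual veracity distortion)"
    # Stage 2: visual veracity -- ask the LVLM whether the image is generated/edited.
    if llm_judge(f"Inspect the image at {sample.image_path}. "
                 "Is it AI-generated or manipulated? Answer 'real' or 'fake'.") == "fake":
        return "fake (visual veracity distortion)"
    # Stage 3: cross-modal consistency -- does the text actually describe the image?
    if llm_judge(f"Does the text '{sample.claim_text}' match the image at "
                 f"{sample.image_path}? Answer 'real' or 'fake'.") == "fake":
        return "fake (cross-modal consistency distortion)"
    return "real"

if __name__ == "__main__":
    print(detect(Sample("example.jpg", "A flood submerged downtown yesterday.")))
```

A staged decomposition like this lets each check fail fast and attribute the forgery to a specific source, which matches the benchmark's three-way taxonomy; the actual MMD-Agent design may differ.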
@article{liu2025_2406.08772,
  title={MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs},
  author={Xuannan Liu and Zekun Li and Peipei Li and Huaibo Huang and Shuhan Xia and Xing Cui and Linzhi Huang and Weihong Deng and Zhaofeng He},
  journal={arXiv preprint arXiv:2406.08772},
  year={2025}
}