LLM-Consensus: Multi-Agent Debate for Visual Misinformation Detection

Abstract

One of the most challenging forms of misinformation involves the out-of-context (OOC) use of images paired with misleading text, creating false narratives. Existing AI-driven detection systems lack explainability and require expensive fine-tuning. We address these issues with LLM-Consensus, a novel multi-agent debate system for OOC misinformation detection, in which multimodal agents collaborate to assess contextual consistency and request external information, enhancing cross-context reasoning and decision-making. Our framework enables explainable detection with state-of-the-art accuracy even without domain-specific fine-tuning. Extensive ablation studies confirm that external retrieval significantly improves detection accuracy, and user studies demonstrate that LLM-Consensus boosts performance for both experts and non-experts. These results position LLM-Consensus as a powerful tool for autonomous and citizen intelligence applications.
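The abstract describes the framework only at a high level. As a rough illustration of how such a debate loop with on-demand external retrieval could be wired up, here is a minimal Python sketch. Everything in it (the Agent type, DebateState, the SEARCH:/VERDICT: message protocol, the fixed round count, and the majority-vote consensus) is an assumption made for illustration, not the authors' implementation.

from dataclasses import dataclass, field
from typing import Callable, List

# An "agent" here is just a function from prompt text to reply text,
# e.g. a thin wrapper around a (multimodal) LLM API call.
Agent = Callable[[str], str]

@dataclass
class DebateState:
    claim: str                                           # caption paired with the image
    evidence: List[str] = field(default_factory=list)    # externally retrieved context
    transcript: List[str] = field(default_factory=list)  # full debate history

def debate_round(agents: List[Agent], state: DebateState,
                 retrieve: Callable[[str], str]) -> None:
    """One synchronous round: every agent sees the claim, the evidence
    gathered so far, and the transcript, then replies. A reply starting
    with 'SEARCH:' triggers external retrieval instead of a verdict."""
    for i, agent in enumerate(agents):
        prompt = (
            f"Claim: {state.claim}\n"
            f"Evidence: {state.evidence}\n"
            f"Debate so far: {state.transcript}\n"
            "Is the image used out of context? End with 'VERDICT: OOC' or "
            "'VERDICT: GENUINE', or write 'SEARCH: <query>' for more context."
        )
        reply = agent(prompt)
        if reply.startswith("SEARCH:"):
            state.evidence.append(retrieve(reply[len("SEARCH:"):].strip()))
        state.transcript.append(f"agent_{i}: {reply}")

def run_debate(agents: List[Agent], claim: str,
               retrieve: Callable[[str], str], rounds: int = 3):
    """Run a fixed number of rounds, then take a naive majority vote
    over each agent's final reply as the consensus decision."""
    state = DebateState(claim=claim)
    for _ in range(rounds):
        debate_round(agents, state, retrieve)
    finals = state.transcript[-len(agents):]
    is_ooc = sum("VERDICT: OOC" in r for r in finals) > len(agents) / 2
    return is_ooc, state.transcript

if __name__ == "__main__":
    # Stub agents standing in for multimodal LLM calls.
    skeptic: Agent = lambda p: ("SEARCH: original source of this image"
                                if "Evidence: []" in p
                                else "Retrieved source contradicts the caption. VERDICT: OOC")
    believer: Agent = lambda p: "Image and caption look consistent. VERDICT: GENUINE"
    ooc, log = run_debate([skeptic, believer, skeptic],
                          claim="Photo shows the 2024 flood in City X.",
                          retrieve=lambda q: f"(stub search result for: {q})")
    print("Out of context:", ooc)  # True: 2 of 3 final votes are OOC

In a real system each agent would call a multimodal model that also receives the image, and retrieve would query a web or reverse-image search API; the stubs above merely exercise the control flow of debate plus retrieval.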

@article{lakara2025_2410.20140,
  title={LLM-Consensus: Multi-Agent Debate for Visual Misinformation Detection},
  author={Kumud Lakara and Georgia Channing and Juil Sock and Christian Rupprecht and Philip Torr and John Collomosse and Christian Schroeder de Witt},
  journal={arXiv preprint arXiv:2410.20140},
  year={2025}
}