112
1

Calling a Spade a Heart: Gaslighting Multimodal Large Language Models via Negation

Abstract

Multimodal Large Language Models (MLLMs) have exhibited remarkable advancements in integrating different modalities, excelling in complex understanding and generation tasks. Despite their success, MLLMs remain vulnerable to conversational adversarial inputs, particularly negation arguments. This paper systematically evaluates state-of-the-art MLLMs across diverse benchmarks, revealing significant performance drops when negation arguments are introduced to initially correct responses. Notably, we introduce the first benchmark GaslightingBench, specifically designed to evaluate the vulnerability of MLLMs to negation arguments. GaslightingBench consists of multiple-choice questions curated from existing datasets, along with generated negation prompts across 20 diverse categories. Throughout extensive evaluation, we find that proprietary models such as Gemini-1.5-flash, GPT-4o and Claude-3.5-Sonnet demonstrate better resilience compared to open-source counterparts like Qwen2-VL and LLaVA. However, all evaluated MLLMs struggle to maintain logical consistency under negation arguments during conversation. Our findings provide critical insights for improving the robustness of MLLMs against negation inputs, contributing to the development of more reliable and trustworthy multimodal AI systems.

View on arXiv
@article{zhu2025_2501.19017,
  title={ Calling a Spade a Heart: Gaslighting Multimodal Large Language Models via Negation },
  author={ Bin Zhu and Huiyan Qi and Yinxuan Gui and Jingjing Chen and Chong-Wah Ngo and Ee-Peng Lim },
  journal={arXiv preprint arXiv:2501.19017},
  year={ 2025 }
}
Comments on this paper