
DREAM: Disentangling Risks to Enhance Safety Alignment in Multimodal Large Language Models

Abstract

Multimodal Large Language Models (MLLMs) pose unique safety challenges due to their integration of visual and textual data, which introduces new attack surfaces and complex risk combinations. In this paper, we begin with a detailed analysis aimed at disentangling risks through step-by-step reasoning over multimodal inputs. We find that systematic multimodal risk disentanglement substantially enhances the risk awareness of MLLMs. By leveraging the strong discriminative abilities of multimodal risk disentanglement, we further introduce DREAM (Disentangling Risks to Enhance Safety Alignment in MLLMs), a novel approach that strengthens safety alignment in MLLMs through supervised fine-tuning and iterative Reinforcement Learning from AI Feedback (RLAIF). Experimental results show that DREAM significantly boosts safety during both inference and training without compromising performance on normal tasks (i.e., without inducing oversafety), achieving a 16.17% improvement in the SIUO safe&effective score compared to GPT-4V. The data and code are available at this https URL.
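
To make the idea of step-by-step multimodal risk disentanglement concrete, the sketch below shows one plausible way to implement it as a two-stage prompting procedure: first elicit a per-modality risk analysis (image alone, text alone, and their combination), then condition the final answer on that analysis. This is a minimal illustration under assumed names; the prompt wording and the call_mllm helper are hypothetical and are not the authors' released implementation.

```python
# Sketch of multimodal risk disentanglement as a two-stage prompting step.
# The prompt text and call_mllm() are illustrative assumptions, not the
# DREAM paper's actual code.

DISENTANGLE_PROMPT = (
    "Before answering, reason step by step about potential risks:\n"
    "1. Risks present in the image alone.\n"
    "2. Risks present in the text alone.\n"
    "3. Risks that emerge only when the image and text are combined.\n"
    "Then decide whether a safe and helpful answer is possible."
)


def call_mllm(image, text, system_prompt):
    """Placeholder for a vision-language chat call (hypothetical API)."""
    raise NotImplementedError


def answer_with_risk_disentanglement(image, user_query):
    # Stage 1: elicit a structured, per-modality risk analysis.
    risk_analysis = call_mllm(image, user_query, system_prompt=DISENTANGLE_PROMPT)

    # Stage 2: condition the final response on the disentangled risks,
    # so the model refuses only when a genuine risk combination is found.
    final_prompt = (
        f"Risk analysis:\n{risk_analysis}\n\n"
        "Answer the user's request if it can be done safely; "
        "otherwise refuse briefly and explain why."
    )
    return call_mllm(image, user_query, system_prompt=final_prompt)
```

In the paper, this kind of disentangled risk signal is further used to construct supervision for fine-tuning and as AI feedback in the iterative RLAIF stage, rather than only as an inference-time prompt.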

@article{liu2025_2504.18053,
  title={DREAM: Disentangling Risks to Enhance Safety Alignment in Multimodal Large Language Models},
  author={Jianyu Liu and Hangyu Guo and Ranjie Duan and Xingyuan Bu and Yancheng He and Shilong Li and Hui Huang and Jiaheng Liu and Yucheng Wang and Chenchen Jing and Xingwei Qu and Xiao Zhang and Yingshui Tan and Yanan Wu and Jihao Gu and Yangguang Li and Jianke Zhu},
  journal={arXiv preprint arXiv:2504.18053},
  year={2025}
}