Debiasing Multimodal Large Language Models via Noise-Aware Preference Optimization

Abstract

Multimodal Large Language Models (MLLMs) excel at a wide range of tasks, yet they often suffer from modality bias: the model relies heavily on a single modality and overlooks critical information in the others, leading to misplaced focus and irrelevant responses. In this paper, we propose addressing the modality bias problem through the paradigm of preference optimization, contributing RLAIF-V-Bias, a debiased preference optimization dataset, and a Noise-Aware Preference Optimization algorithm. Specifically, we first construct the dataset by introducing perturbations that reduce the informational content of individual modalities, compelling the model to rely on a specific modality when generating negative responses. To handle the noise that is inevitable in automatically constructed data, we combine the noise-robust Mean Absolute Error (MAE) with the Binary Cross-Entropy (BCE) used in Direct Preference Optimization (DPO) via a negative Box-Cox transformation, and dynamically adjust the algorithm's noise robustness based on the noise level estimated in the data. Extensive experiments validate our approach, demonstrating not only its effectiveness in mitigating modality bias but also its significant role in reducing hallucinations.
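To make the loss construction more concrete, the sketch below shows how a negative Box-Cox (generalized cross-entropy) transform interpolates between the BCE term of standard DPO and a bounded, MAE-style term: as q approaches 0 the loss recovers -log(sigmoid(z)), and at q = 1 it becomes 1 - sigmoid(z). This is a minimal PyTorch illustration, not the paper's implementation; the function name napo_style_loss, the fixed coefficient q, and the hyperparameter values are assumptions, and the paper's dynamic, data-driven adjustment of the noise-robustness coefficient is not reproduced here.

```python
import torch

def napo_style_loss(policy_logratios, ref_logratios, beta=0.1, q=0.3):
    """Noise-aware DPO-style loss via a negative Box-Cox transform (sketch).

    Args:
        policy_logratios: log pi(y_w|x) - log pi(y_l|x) under the policy model.
        ref_logratios:    the same quantity under the frozen reference model.
        beta: DPO temperature scaling the preference margin.
        q:    interpolation coefficient; q -> 0 recovers the BCE loss of
              standard DPO, q = 1 gives a bounded MAE-style loss. In the
              full method q would be set dynamically from an estimated
              noise level; here it is a fixed illustrative hyperparameter.
    """
    z = beta * (policy_logratios - ref_logratios)  # preference margin
    p = torch.sigmoid(z)                           # prob. of preferring y_w
    # Negative Box-Cox transform of p: (1 - p^q) / q interpolates
    # between -log(p) (as q -> 0) and 1 - p (at q = 1).
    return ((1.0 - p.pow(q)) / q).mean()

# Example usage with a small batch of preference-pair margins.
policy = torch.tensor([1.2, -0.3, 0.8])
ref = torch.tensor([0.5, -0.1, 0.4])
loss = napo_style_loss(policy, ref, beta=0.1, q=0.3)
```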

@article{zhang2025_2503.17928,
  title={Debiasing Multimodal Large Language Models via Noise-Aware Preference Optimization},
  author={Zefeng Zhang and Hengzhu Tang and Jiawei Sheng and Zhenyu Zhang and Yiming Ren and Zhenyang Li and Dawei Yin and Duohe Ma and Tingwen Liu},
  journal={arXiv preprint arXiv:2503.17928},
  year={2025}
}