
OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference

Abstract

Recent advancements in open-source multi-modal large language models (MLLMs) have primarily focused on enhancing foundational capabilities, leaving a significant gap in human preference alignment. This paper introduces OmniAlign-V, a comprehensive dataset of 200K high-quality training samples featuring diverse images, complex questions, and varied response formats to improve MLLMs' alignment with human preferences. We also present MM-AlignBench, a human-annotated benchmark specifically designed to evaluate MLLMs' alignment with human values. Experimental results show that finetuning MLLMs on OmniAlign-V, using either Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO), significantly enhances human preference alignment while maintaining or improving performance on standard VQA benchmarks, thus preserving their fundamental capabilities. Our datasets, benchmark, code, and checkpoints have been released at this https URL.
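For readers unfamiliar with the DPO recipe the abstract references, the sketch below shows the standard DPO objective (Rafailov et al., 2023) that such preference finetuning typically optimizes. This is an illustrative sketch, not the paper's implementation: the function name, argument names, and the `beta` value are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: push the policy's log-probability ratio
    (relative to a frozen reference model) to be larger on the preferred
    response than on the dispreferred one. `beta` is illustrative."""
    # Log-ratios of policy vs. reference on each response
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Margin between preferred and dispreferred log-ratios
    logits = beta * (chosen_ratio - rejected_ratio)
    # Negative log-sigmoid of the margin, averaged over the batch
    return -F.logsigmoid(logits).mean()
```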

@article{zhao2025_2502.18411,
  title={OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference},
  author={Xiangyu Zhao and Shengyuan Ding and Zicheng Zhang and Haian Huang and Maosong Cao and Weiyun Wang and Jiaqi Wang and Xinyu Fang and Wenhai Wang and Guangtao Zhai and Haodong Duan and Hua Yang and Kai Chen},
  journal={arXiv preprint arXiv:2502.18411},
  year={2025}
}