AVC-DPO: Aligned Video Captioning via Direct Preference Optimization

Although video multimodal large language models (video MLLMs) have achieved substantial progress in video captioning tasks, it remains challenging to adjust the focal emphasis of video captions according to human preferences. To address this limitation, we propose Aligned Video Captioning via Direct Preference Optimization (AVC-DPO), a post-training framework designed to enhance the captioning capabilities of video MLLMs through preference alignment. Our approach designs enhanced prompts that specifically target temporal dynamics and spatial information, two key factors that humans care about when watching a video, thereby incorporating human-centric preferences. AVC-DPO constructs preference data from the same foundation model's caption responses generated under different prompt conditions and uses these pairs for preference-aware training and caption alignment. With this framework, we achieved first place in the LOVE@CVPR'25 Workshop Track 1A: Video Detailed Captioning Challenge, ranking highest on the Video Detailed Captioning (VDC) benchmark under the VDCSCORE evaluation metric.
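As a point of reference (the abstract does not spell this out), preference alignment of this kind typically builds on the standard DPO objective of Rafailov et al. (2023), where the preferred caption $y_w$ and dispreferred caption $y_l$ would, under AVC-DPO's construction, plausibly correspond to responses produced under the enhanced and the plain prompt conditions, respectively; this pairing is an illustrative assumption rather than a detail confirmed by the abstract:

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right],
\]

where $x$ is the video-and-prompt input, $\pi_\theta$ is the video MLLM being fine-tuned, $\pi_{\mathrm{ref}}$ is the frozen foundation model, $\sigma$ is the sigmoid function, and $\beta$ controls how strongly the policy is penalized for deviating from the reference model.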
@article{tang2025_2507.01492,
  title   = {AVC-DPO: Aligned Video Captioning via Direct Preference Optimization},
  author  = {Jiyang Tang and Hengyi Li and Yifan Du and Wayne Xin Zhao},
  journal = {arXiv preprint arXiv:2507.01492},
  year    = {2025}
}