Contextual AD Narration with Interleaved Multimodal Sequence

The Audio Description (AD) task aims to generate descriptions of visual elements that help visually impaired individuals access long-form video content, such as movies. Taking video features, text, a character bank, and contextual information as inputs, the generated ADs can refer to characters by name and provide reasonable, contextual descriptions that help the audience follow the movie's storyline. To achieve this goal, we propose a simple and unified framework, termed Uni-AD, that leverages pre-trained foundation models to generate ADs from an interleaved multimodal sequence as input. To align features across modalities at a finer granularity, we introduce a simple and lightweight module that maps video features into the textual feature space. Moreover, we propose a character-refinement module that provides more precise information by identifying the main characters who play more significant roles in the video context. With these designs, we further incorporate contextual information and a contrastive loss into our architecture to generate smoother and more contextually appropriate ADs. Experiments on multiple AD datasets show that Uni-AD performs well on AD generation, demonstrating the effectiveness of our approach. Our code is available at: this https URL.
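As a rough illustration of the interleaved-input idea described above, the sketch below shows a minimal video-to-text projection module and one way projected visual tokens could be interleaved with character and context tokens before being passed to a frozen language model. The module design (a two-layer MLP), the dimensions, the helper names, and the token ordering are all assumptions made for illustration; the abstract does not specify these details, so the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn


class VideoToTextProjector(nn.Module):
    """Lightweight mapping from video features to the text embedding space.

    A two-layer MLP is an assumption here; the paper only states that the
    mapping module is simple and lightweight.
    """

    def __init__(self, video_dim: int = 768, text_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(video_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (num_frames, video_dim) -> (num_frames, text_dim)
        return self.proj(video_feats)


def build_interleaved_sequence(context_tokens: torch.Tensor,
                               character_tokens: torch.Tensor,
                               visual_tokens: torch.Tensor) -> torch.Tensor:
    """Concatenate context, character, and visual tokens into one sequence.

    The ordering (previous context -> character bank -> current clip) is a
    hypothetical choice for this sketch, not the paper's stated layout.
    """
    return torch.cat([context_tokens, character_tokens, visual_tokens], dim=0)


if __name__ == "__main__":
    projector = VideoToTextProjector(video_dim=768, text_dim=4096)
    clip_feats = torch.randn(16, 768)        # 16 frames of visual features
    visual_tokens = projector(clip_feats)    # mapped into the text space
    character_tokens = torch.randn(4, 4096)  # embeddings for named characters
    context_tokens = torch.randn(32, 4096)   # embeddings of preceding ADs
    seq = build_interleaved_sequence(context_tokens, character_tokens, visual_tokens)
    print(seq.shape)  # torch.Size([52, 4096]), fed to the language model
```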