Shot-by-Shot: Film-Grammar-Aware Training-Free Audio Description Generation

Our objective is the automatic generation of Audio Descriptions (ADs) for edited video material, such as movies and TV series. To achieve this, we propose a two-stage framework that leverages "shots" as the fundamental units of video understanding. This includes extending temporal context to neighbouring shots and incorporating film grammar devices, such as shot scales and thread structures, to guide AD generation. Our method is compatible with both open-source and proprietary Visual-Language Models (VLMs), integrating expert knowledge from add-on modules without requiring additional training of the VLMs. We achieve state-of-the-art performance among all prior training-free approaches and even surpass fine-tuned methods on several benchmarks. To evaluate the quality of predicted ADs, we introduce a new evaluation measure -- an action score -- specifically targeted at assessing this important aspect of AD. Additionally, we propose a novel evaluation protocol that treats automatic frameworks as AD generation assistants and asks them to generate multiple candidate ADs for selection.
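The "assistant" evaluation protocol in the last sentence can be sketched as a simple generate-then-select loop. This is an illustrative sketch only, not the paper's implementation: the function names (`select_best_ad`, `toy_action_score`) and the toy keyword-based scorer are hypothetical stand-ins for the framework's candidate generation and the proposed action score.

```python
# Hypothetical sketch of the candidate-selection protocol: the framework
# proposes several candidate ADs for a time interval, and the best one is
# selected by a scoring function. All names here are illustrative, not the
# paper's actual API.
from typing import Callable, List


def select_best_ad(candidates: List[str],
                   score_fn: Callable[[str], float]) -> str:
    """Pick the highest-scoring candidate AD."""
    if not candidates:
        raise ValueError("need at least one candidate AD")
    return max(candidates, key=score_fn)


# Toy stand-in for an action-focused scorer: rewards candidates that
# mention action verbs (the real action score is a learned/reference-based
# measure, not this keyword heuristic).
ACTION_VERBS = {"runs", "opens", "turns", "walks"}


def toy_action_score(ad: str) -> float:
    words = set(ad.lower().rstrip(".").split())
    return len(words & ACTION_VERBS) / max(len(words), 1)


candidates = [
    "A man stands in a room.",
    "A man opens the door and walks out.",
]
best = select_best_ad(candidates, toy_action_score)
```

Here the second candidate wins because it describes actions rather than a static scene, which is the behaviour an action-aware selection criterion is meant to encourage.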
@article{xie2025_2504.01020,
  title={Shot-by-Shot: Film-Grammar-Aware Training-Free Audio Description Generation},
  author={Junyu Xie and Tengda Han and Max Bain and Arsha Nagrani and Eshika Khandelwal and Gül Varol and Weidi Xie and Andrew Zisserman},
  journal={arXiv preprint arXiv:2504.01020},
  year={2025}
}