Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation

Vision-Language Models (VLMs) often struggle to balance visual and textual information when summarizing complex multimodal inputs, such as entire TV show episodes. In this paper, we propose a zero-shot video-to-text summarization approach that builds its own screenplay representation of an episode, effectively integrating key video moments, dialogue, and character information into a unified document. Unlike previous approaches, we simultaneously generate screenplays and name the characters in a zero-shot manner, using only the audio, video, and transcripts as input. Additionally, we highlight that existing summarization metrics can fail to assess the multimodal content in summaries. To address this, we introduce MFactSum, a multimodal metric that evaluates summaries with respect to both the vision and text modalities. Using MFactSum, we evaluate our screenplay summaries on the SummScreen3D dataset, demonstrating superiority over state-of-the-art VLMs such as Gemini 1.5 by generating summaries that contain 20% more relevant visual information while requiring 75% less of the video as input.
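To give a rough sense of what a modality-balanced factual metric measures, the Python sketch below scores a summary by how many reference facts it covers, keeping visual and textual fact pools separate so that neither modality dominates. This is an illustrative toy only, not the paper's MFactSum: the function names, the exact-string fact matching, the example facts, and the weighting parameter alpha are all assumptions introduced here for clarity.

def fact_coverage(summary_facts: set[str], reference_facts: set[str]) -> float:
    """Fraction of reference facts that appear in the summary (recall-style)."""
    if not reference_facts:
        return 1.0
    return len(summary_facts & reference_facts) / len(reference_facts)

def multimodal_fact_score(summary_facts: set[str],
                          visual_ref_facts: set[str],
                          textual_ref_facts: set[str],
                          alpha: float = 0.5) -> float:
    """Weighted average of per-modality coverage; alpha balances vision vs. text."""
    visual_cov = fact_coverage(summary_facts, visual_ref_facts)
    textual_cov = fact_coverage(summary_facts, textual_ref_facts)
    return alpha * visual_cov + (1.0 - alpha) * textual_cov

if __name__ == "__main__":
    # Toy example: hypothetical facts extracted from a soap-opera episode summary.
    summary = {"Sharon confronts Nick", "the scene takes place in the hospital"}
    visual_refs = {"the scene takes place in the hospital", "Nick hands over an envelope"}
    textual_refs = {"Sharon confronts Nick", "Victor threatens to cut Nick off"}
    print(multimodal_fact_score(summary, visual_refs, textual_refs))  # 0.5

A metric in this spirit would, in practice, replace exact string matching with entailment-based fact verification; the sketch only conveys why separating visual and textual fact pools prevents a text-heavy summary from scoring well while ignoring the video.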
@article{pennec2025_2505.06594,
  title   = {Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation},
  author  = {Galann Pennec and Zhengyuan Liu and Nicholas Asher and Philippe Muller and Nancy F. Chen},
  journal = {arXiv preprint arXiv:2505.06594},
  year    = {2025}
}