CFSum: A Transformer-Based Multi-Modal Video Summarization Framework With Coarse-Fine Fusion
Video summarization, which selects the most informative and/or user-relevant parts of an original video to create a concise summary video, has high research value and strong consumer demand in today's era of video proliferation. Multi-modal video summarization that accommodates user input has become a research hotspot. However, current multi-modal video summarization methods suffer from two limitations. First, existing methods inadequately fuse information from different modalities and cannot effectively utilize modality-unique features. Second, most multi-modal methods focus on the video and text modalities and neglect the audio modality, even though audio can be highly informative for certain types of videos. In this paper, we propose CFSum, a transformer-based multi-modal video summarization framework with coarse-fine fusion. CFSum takes video, text, and audio features as input and incorporates a two-stage transformer-based feature fusion framework to fully utilize modality-unique information. In the first stage, the multi-modal features are fused simultaneously to perform initial coarse-grained feature fusion; in the second stage, the video and audio features are explicitly attended to the text representation, yielding finer-grained information interaction. The CFSum architecture gives equal importance to each modality, ensuring that each modal feature interacts deeply with the other modalities. Our extensive comparative experiments against prior methods and ablation studies on various datasets confirm the effectiveness and superiority of CFSum.
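To make the two-stage design concrete, below is a minimal PyTorch-style sketch of coarse-fine fusion as described in the abstract. All module names, dimensions, layer counts, and the exact attention wiring are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn

class CoarseFineFusion(nn.Module):
    """Hypothetical sketch of a two-stage (coarse-to-fine) multi-modal fusion block."""
    def __init__(self, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        # Stage 1: coarse-grained fusion -- self-attention over the
        # concatenated video, text, and audio token sequences.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.coarse = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Stage 2: fine-grained interaction -- video and audio tokens
        # attend to the text representation via cross-attention.
        self.video_x_text = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.audio_x_text = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, video, text, audio):
        # video: (B, Tv, D), text: (B, Tt, D), audio: (B, Ta, D)
        Tv, Tt = video.size(1), text.size(1)
        # Coarse stage: fuse all modalities simultaneously.
        fused = self.coarse(torch.cat([video, text, audio], dim=1))
        v, t, a = fused[:, :Tv], fused[:, Tv:Tv + Tt], fused[:, Tv + Tt:]
        # Fine stage: each non-text modality queries the text representation.
        v_fine, _ = self.video_x_text(v, t, t)
        a_fine, _ = self.audio_x_text(a, t, t)
        return v_fine, t, a_fine

In such a sketch, a small frame-level scoring head (e.g., an MLP over the refined video tokens) would then rank frames for inclusion in the summary; that head and any training details are likewise assumptions beyond what the abstract states.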
@article{guo2025_2503.00364,
  title   = {CFSum: A Transformer-Based Multi-Modal Video Summarization Framework With Coarse-Fine Fusion},
  author  = {Yaowei Guo and Jiazheng Xing and Xiaojun Hou and Shuo Xin and Juntao Jiang and Demetri Terzopoulos and Chenfanfu Jiang and Yong Liu},
  journal = {arXiv preprint arXiv:2503.00364},
  year    = {2025}
}