Long Video Understanding with Learnable Retrieval in Video-Language Models

Abstract

The remarkable natural language understanding, reasoning, and generation capabilities of large language models (LLMs) have made them attractive for video understanding, with video tokens serving as contextual input. However, employing LLMs for long video understanding presents significant challenges. The large number of video tokens incurs considerable computational cost, while aggregating tokens loses visual detail. Moreover, the abundance of question-irrelevant tokens introduces noise into the video reasoning process. To address these issues, we introduce a simple yet effective learnable retrieval-based video-language model (R-VLM) for efficient long video understanding. Specifically, given a question (query) and a long video, our model identifies the K most relevant video chunks and uses their associated visual tokens as context for LLM inference. This effectively reduces the number of video tokens, eliminates noise interference, and enhances system performance. We achieve this by incorporating a lightweight, learnable MLP block that retrieves question-relevant chunks, trained end-to-end with our video-language model under a proposed soft matching loss. Experimental results on multiple zero-shot video question answering datasets validate the effectiveness of our framework for comprehending long videos.
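
The retrieval step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the class name `ChunkRetriever`, the mean pooling of chunk tokens, and the cosine-similarity scoring are all assumptions made for illustration, and the paper's soft matching loss (whose exact form is not given in the abstract) is only alluded to in the comments.

```python
# A minimal, illustrative sketch of question-conditioned chunk retrieval.
# Assumptions (not from the paper): ChunkRetriever, mean pooling, cosine scoring.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChunkRetriever(nn.Module):
    """Scores video chunks against a question embedding and keeps the top K."""

    def __init__(self, d_model: int, k: int):
        super().__init__()
        self.k = k
        # Lightweight MLP: the learnable component that projects pooled
        # chunk features into the question embedding space.
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, chunk_tokens: torch.Tensor, question_emb: torch.Tensor):
        # chunk_tokens: (num_chunks, tokens_per_chunk, d_model)
        # question_emb: (d_model,)
        chunk_feat = self.mlp(chunk_tokens.mean(dim=1))  # (num_chunks, d_model)
        scores = F.cosine_similarity(chunk_feat, question_emb.unsqueeze(0), dim=-1)
        top_idx = scores.topk(self.k).indices            # K best-matching chunks
        # Concatenate the selected chunks' visual tokens as LLM context,
        # instead of feeding all video tokens to the LLM.
        context = chunk_tokens[top_idx].flatten(0, 1)    # (K * tokens_per_chunk, d_model)
        # During training, a soft matching loss over `scores` (as the paper
        # proposes; formulation not shown here) would supervise the retriever,
        # since hard top-K selection is itself non-differentiable.
        return context, scores, top_idx


# Usage with hypothetical sizes: 32 chunks of 64 tokens, 4096-dim features.
retriever = ChunkRetriever(d_model=4096, k=5)
context, scores, top_idx = retriever(torch.randn(32, 64, 4096), torch.randn(4096))
```

Feeding only the K retrieved chunks keeps the LLM's context short regardless of video length, which is the efficiency argument the abstract makes.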

@article{xu2025_2312.04931,
  title={Long Video Understanding with Learnable Retrieval in Video-Language Models},
  author={Jiaqi Xu and Cuiling Lan and Wenxuan Xie and Xuejin Chen and Yan Lu},
  journal={arXiv preprint arXiv:2312.04931},
  year={2025}
}