
Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma?

Abstract

The rise of Large Vision-Language Models (LVLMs) has significantly advanced video understanding. However, efficiently processing long videos remains a challenge due to the "Sampling Dilemma": low-density sampling risks missing critical information, while high-density sampling introduces redundancy. To address this issue, we introduce LSDBench, the first benchmark designed to evaluate LVLMs on long-video tasks by constructing high Necessary Sampling Density (NSD) questions, where NSD represents the minimum sampling density required to accurately answer a given question. LSDBench focuses on dense, short-duration actions to rigorously assess the sampling strategies employed by LVLMs. To tackle the challenges posed by high-NSD questions, we propose a novel Reasoning-Driven Hierarchical Sampling (RHS) framework, which combines global localization of question-relevant cues with local dense sampling for precise inference. Additionally, we develop a lightweight Semantic-Guided Frame Selector to prioritize informative frames, enabling RHS to achieve comparable or superior performance with significantly fewer sampled frames. Together, our LSDBench and RHS framework address the unique challenges of high-NSD long-video tasks, setting a new standard for evaluating and improving LVLMs in this domain. Our benchmark and evaluation code have been released at: this https URL
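To make the two-stage idea behind RHS concrete, below is a minimal sketch of hierarchical sampling: a sparse global pass scores coarse frames for question relevance, and a dense pass then samples only inside the highest-scoring window. This is an illustration under our own assumptions, not the authors' implementation; the function names (hierarchical_sample, relevance_fn) and all stride/window parameters are hypothetical, and relevance_fn stands in for whatever the LVLM or frame selector would provide.

import numpy as np

def hierarchical_sample(num_frames: int,
                        relevance_fn,
                        sparse_stride: int = 64,
                        window: int = 256,
                        dense_stride: int = 4) -> np.ndarray:
    """Two-stage sampling sketch: a sparse global pass localizes the
    most question-relevant region; that region is then sampled densely.

    relevance_fn maps a frame index to a relevance score. Here it is
    an arbitrary stand-in for a learned scorer.
    """
    # Stage 1: sparse global pass over the whole video.
    coarse = np.arange(0, num_frames, sparse_stride)
    scores = np.array([relevance_fn(i) for i in coarse])

    # Localize the peak-relevance frame and define a local window around it.
    center = int(coarse[np.argmax(scores)])
    lo = max(0, center - window // 2)
    hi = min(num_frames, center + window // 2)

    # Stage 2: dense sampling inside the localized window only.
    return np.arange(lo, hi, dense_stride)

# Toy usage: relevance peaks near frame 9000 of a 20000-frame video.
frames = hierarchical_sample(20_000, lambda i: -abs(i - 9_000))
print(frames[0], "...", frames[-1])

The sketch illustrates why such a scheme can answer high-NSD questions cheaply: the sparse pass touches only num_frames / sparse_stride frames, while full dense sampling at dense_stride is confined to a window around the localized cue.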

@article{qu2025_2503.12496,
  title={Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma?},
  author={Tianyuan Qu and Longxiang Tang and Bohao Peng and Senqiao Yang and Bei Yu and Jiaya Jia},
  journal={arXiv preprint arXiv:2503.12496},
  year={2025}
}