Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs

Video understanding is a crucial next step for multimodal large language models (MLLMs). Various benchmarks have been introduced to better evaluate these models. Nevertheless, current video benchmarks remain inefficient for evaluating video models during iterative development, due to the high cost of constructing datasets and the difficulty of isolating specific skills. In this paper, we propose VideoNIAH (Video Needle In A Haystack), a benchmark construction framework based on synthetic video generation. VideoNIAH decouples video content from its query-response pairs by inserting unrelated visual 'needles' into original videos. The framework automates the generation of query-response pairs using predefined rules, minimizing manual labor. The queries focus on specific aspects of video understanding, enabling more skill-specific evaluations. The separation between video content and the queries also allows for increased video variety and evaluations across different lengths. Utilizing VideoNIAH, we compile a video benchmark, VNBench, which includes tasks such as retrieval, ordering, and counting to evaluate three key aspects of video understanding: temporal perception, chronological ordering, and spatio-temporal coherence. We conduct a comprehensive evaluation of both proprietary and open-source models, uncovering significant differences in their video understanding capabilities across various tasks. Additionally, we perform an in-depth analysis of the test results and model configurations. Based on these findings, we provide advice for improving video MLLM training, offering valuable insights to guide future research and model development. The code and data are available at this https URL.
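To make the construction idea concrete, the sketch below illustrates one plausible way such a synthetic pipeline could work: insert labeled 'needle' frames at random positions in a video and derive query-response pairs for the retrieval, ordering, and counting tasks directly from the known insertion positions. This is a minimal illustrative sketch, not the authors' released code; the data structures (`Needle`, `SyntheticVideo`) and functions (`insert_needles`, `make_qa_pairs`) are assumptions for exposition, and frames are modeled as plain strings rather than real video data.

```python
# Minimal sketch of VideoNIAH-style benchmark construction (assumed interface,
# not the authors' implementation). A "video" is modeled as a list of frames;
# needles are synthetic frames tagged with a label. Query-response pairs are
# generated from the known insertion positions via simple predefined rules.
import random
from dataclasses import dataclass, field

@dataclass
class Needle:
    label: str        # e.g. the object or text shown in the inserted frame
    position: int     # frame index where the needle ends up in the video

@dataclass
class SyntheticVideo:
    frames: list                              # original frames plus needles
    needles: list = field(default_factory=list)

def insert_needles(frames, needle_labels, rng):
    """Insert one synthetic needle frame per label at a random position."""
    video = SyntheticVideo(frames=list(frames))
    for label in needle_labels:
        pos = rng.randrange(len(video.frames) + 1)
        video.frames.insert(pos, f"<needle:{label}>")   # placeholder frame
        # inserting shifts every previously placed needle at or after pos
        for n in video.needles:
            if n.position >= pos:
                n.position += 1
        video.needles.append(Needle(label=label, position=pos))
    return video

def make_qa_pairs(video):
    """Rule-based queries for the retrieval, ordering, and counting tasks."""
    ordered = sorted(video.needles, key=lambda n: n.position)
    labels = [n.label for n in ordered]
    return [
        # retrieval: temporal perception of a single inserted needle
        {"task": "retrieval",
         "query": "Which inserted object appears first in the video?",
         "answer": labels[0]},
        # ordering: chronological ordering of all inserted needles
        {"task": "ordering",
         "query": "List the inserted objects in the order they appear.",
         "answer": labels},
        # counting: spatio-temporal coherence over repeated needles
        {"task": "counting",
         "query": f"How many times does '{labels[0]}' appear as an inserted object?",
         "answer": labels.count(labels[0])},
    ]

if __name__ == "__main__":
    rng = random.Random(0)
    base_video = [f"frame_{i}" for i in range(300)]          # stand-in frames
    video = insert_needles(base_video, ["red apple", "blue cube", "red apple"], rng)
    for qa in make_qa_pairs(video):
        print(qa["task"], "|", qa["query"], "->", qa["answer"])
```

Because the answers are determined entirely by where the needles were inserted, the same rule set can be reused over arbitrary source videos and lengths, which is the property the abstract highlights as decoupling video content from the query-response pairs.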