TIME: Temporal-sensitive Multi-dimensional Instruction Tuning and Benchmarking for Video-LLMs
Video large language models (Video-LLMs) have achieved remarkable performance on tasks such as video question answering; however, their temporal understanding remains suboptimal. To address this limitation, we curate a dedicated instruction fine-tuning dataset focused on enhancing temporal comprehension across five key dimensions. To reduce reliance on costly temporal annotations, we introduce a multi-task prompt fine-tuning approach that seamlessly integrates temporal-sensitive tasks into existing instruction datasets without requiring additional annotations. Furthermore, we develop a novel benchmark for temporal-sensitive video understanding that not only fills the gaps in dimension coverage left by existing benchmarks but also rigorously filters out potential shortcuts, ensuring more accurate evaluation. Extensive experimental results demonstrate that our approach significantly enhances the temporal understanding of Video-LLMs while avoiding reliance on shortcuts.
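The abstract describes the multi-task prompt fine-tuning idea only at a high level. As a purely hypothetical sketch (none of the task names, function names, or data fields below come from the paper), one way an annotation-free temporal task could be derived from existing instruction data is a clip-order-recovery prompt, which needs no labels beyond the clips' original order:

    # Hypothetical illustration, not the paper's method: turn an ordered list
    # of video clip IDs into a temporal-sensitive training sample by shuffling
    # the clips and asking the model to recover the original order. No extra
    # temporal annotation is required, since the source order is the label.
    import random

    def make_order_recovery_sample(clips, base_instruction):
        shuffled = clips[:]          # copy so the answer keeps the true order
        random.shuffle(shuffled)
        prompt = (
            f"{base_instruction}\n"
            f"The following clips are shown out of order: {shuffled}. "
            "List the clip IDs in their correct temporal order."
        )
        return {"prompt": prompt, "answer": clips}

    sample = make_order_recovery_sample(
        clips=["c1", "c2", "c3", "c4"],
        base_instruction="Watch the video segments below.",
    )
    print(sample["prompt"])

Analogous prompts (e.g., asking whether one clip precedes another) could cover other temporal dimensions in the same annotation-free way; the actual task set and dimensions are defined in the paper itself.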
@article{wang2025_2503.09994,
  title   = {TIME: Temporal-sensitive Multi-dimensional Instruction Tuning and Benchmarking for Video-LLMs},
  author  = {Yunxiao Wang and Meng Liu and Rui Shao and Haoyu Zhang and Bin Wen and Fan Yang and Tingting Gao and Di Zhang and Liqiang Nie},
  journal = {arXiv preprint arXiv:2503.09994},
  year    = {2025}
}