VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling

31 December 2024
Xinhao Li
Yi Wang
Jiashuo Yu
Xiangyu Zeng
Yuhan Zhu
Haian Huang
Jianfei Gao
Kunchang Li
Yinan He
Chenting Wang
Yu Qiao
Yali Wang
Limin Wang
Abstract

Long-context video modeling is critical for multimodal large language models (MLLMs), enabling them to process movies, online video streams, and more. Despite recent advances, handling long videos remains challenging due to the difficulty of efficiently understanding their extremely long context. This paper addresses the issue from four aspects: model architecture, training data, training strategy, and evaluation benchmark. First, we propose a novel Hierarchical video token Compression (HiCo) method, which leverages visual redundancy in long videos to compress the video context from the clip level to the video level, significantly reducing computation while preserving essential details and achieving an extreme compression ratio of approximately 1/50 with almost no performance loss. Second, we introduce a multi-stage short-to-long learning scheme, a large-scale dataset of real-world long videos named LongVid, and a challenging "Multi-Hop Needle-In-A-Video-Haystack" benchmark. Finally, we build a powerful video MLLM named VideoChat-Flash, which shows leading performance on mainstream long and short video benchmarks at both the 2B and 7B model scales. It is the first open-source model to achieve 99.1% accuracy over 10,000 frames in NIAH.

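The abstract only summarizes HiCo, so the paper's actual architecture is not reproduced here. As a rough illustration of two-level (clip-to-video) token compression, the PyTorch sketch below uses simple average pooling within each clip followed by a second pooling pass across clips; the function names (clip_pool, video_merge) and pooling factors are illustrative assumptions, not the authors' design, which presumably uses learned compression rather than plain pooling.

import torch
import torch.nn.functional as F

def clip_pool(clip_tokens: torch.Tensor, factor: int) -> torch.Tensor:
    """Compress tokens within a single clip by average-pooling groups of `factor` tokens."""
    n, d = clip_tokens.shape
    pad = (-n) % factor                      # pad so the token count divides evenly
    if pad:
        clip_tokens = F.pad(clip_tokens, (0, 0, 0, pad))
    return clip_tokens.view(-1, factor, d).mean(dim=1)

def video_merge(clip_summaries: list[torch.Tensor], factor: int) -> torch.Tensor:
    """Second stage: merge neighbouring clip summaries across the whole video."""
    video_tokens = torch.cat(clip_summaries, dim=0)
    return clip_pool(video_tokens, factor)

# Example: 900 clips of 196 visual tokens each (hidden size 1024).
clips = [torch.randn(196, 1024) for _ in range(900)]
compressed = video_merge([clip_pool(c, factor=25) for c in clips], factor=2)
print(compressed.shape)  # torch.Size([3600, 1024])
# 900 * 196 = 176,400 input tokens -> 3,600 tokens, roughly the ~1/50 ratio cited above.
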
View on arXiv
@article{li2025_2501.00574,
  title={VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling},
  author={Xinhao Li and Yi Wang and Jiashuo Yu and Xiangyu Zeng and Yuhan Zhu and Haian Huang and Jianfei Gao and Kunchang Li and Yinan He and Chenting Wang and Yu Qiao and Yali Wang and Limin Wang},
  journal={arXiv preprint arXiv:2501.00574},
  year={2025}
}