Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model

Video understanding models often struggle with high computational requirements, extensive parameter counts, and slow inference speed, making them impractical for real-world use. To tackle these challenges, we propose Mobile-VideoGPT, an efficient multimodal framework designed to operate with fewer than a billion parameters. Unlike traditional video large multimodal models (LMMs), Mobile-VideoGPT consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM), enabling real-time throughput. To further improve efficiency, we present an Attention-Based Frame Scoring mechanism to select key-frames, along with an efficient token projector that prunes redundant visual tokens while preserving essential contextual cues. We evaluate our model across six well-established video understanding benchmarks (e.g., MVBench, EgoSchema, NextQA, and PercepTest). Our results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second while outperforming existing state-of-the-art 0.5B-parameter models by 6 points on average, with 40% fewer parameters and more than 2x higher throughput. Our code and models are publicly available at: this https URL.
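The abstract does not spell out the scoring formulation, but a minimal sketch of attention-based key-frame selection, assuming per-frame feature vectors scored against a mean-pooled video query and kept to the top-K frames, might look as follows (the function name select_key_frames, the pooled query, and the top-K rule are illustrative assumptions, not the authors' implementation):

# Minimal sketch (not the authors' code): attention-based frame scoring
# that keeps the K highest-scoring frames, assuming (T, D) frame features.
import torch
import torch.nn.functional as F

def select_key_frames(frame_feats: torch.Tensor, k: int) -> torch.Tensor:
    """frame_feats: (T, D) per-frame embeddings; returns sorted indices of K key-frames."""
    # Assumption: use the mean-pooled video embedding as the attention query.
    query = frame_feats.mean(dim=0, keepdim=True)                      # (1, D)
    scores = (query @ frame_feats.T) / frame_feats.shape[-1] ** 0.5    # (1, T) scaled dot-product
    attn = F.softmax(scores, dim=-1).squeeze(0)                        # (T,) attention over frames
    # Keep the K highest-attention frames, restoring temporal order.
    topk = torch.topk(attn, k=min(k, frame_feats.shape[0])).indices
    return torch.sort(topk).values

# Usage: 32 candidate frames with 512-d features, keep 8 key-frames.
feats = torch.randn(32, 512)
key_idx = select_key_frames(feats, k=8)

Only the selected frames would then be passed to the token projector, which further prunes redundant visual tokens before the SLM.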
@article{shaker2025_2503.21782,
  title   = {Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model},
  author  = {Abdelrahman Shaker and Muhammad Maaz and Chenhui Gou and Hamid Rezatofighi and Salman Khan and Fahad Shahbaz Khan},
  journal = {arXiv preprint arXiv:2503.21782},
  year    = {2025}
}