StreamMind: Unlocking Full Frame Rate Streaming Video Dialogue through Event-Gated Cognition

With the rise of real-world human-AI interaction applications such as AI assistants, the need for streaming video dialogue is critical. To address this need, we introduce StreamMind, a video LLM framework that achieves ultra-FPS streaming video processing (100 fps on a single A100) and enables proactive, always-on responses in real time, without explicit user intervention.

To solve the key challenge of the contradiction between linear video streaming speed and quadratic transformer computation cost, we propose a novel perception-cognition interleaving paradigm, "event-gated LLM invocation", in contrast to the existing per-time-step LLM invocation. By introducing a Cognition Gate network between the video encoder and the LLM, the LLM is invoked only when relevant events occur. To extract event features at constant cost, we propose the Event-Preserving Feature Extractor (EPFE), based on a state-space method, which generates a single perception token for spatiotemporal features. Together, these techniques equip the video LLM with full-FPS perception and real-time cognitive response.

Experiments on Ego4D and SoccerNet streaming tasks, as well as standard offline benchmarks, demonstrate state-of-the-art performance in both model capability and real-time efficiency, paving the way for ultra-high-FPS applications such as Game AI and interactive media. The code and data are available at this https URL.
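Since the abstract gives only the high-level design, the following is a minimal PyTorch sketch of the event-gated paradigm as described: a state-space-style EPFE folds each incoming frame into a fixed-size hidden state at constant per-frame cost and emits a single perception token, and a lightweight Cognition Gate decides whether to invoke the LLM. All module architectures, dimensions, and the gate threshold here are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class EPFE(nn.Module):
    """Illustrative Event-Preserving Feature Extractor (sketch only).

    A linear state-space recurrence keeps a fixed-size hidden state, so each
    new frame is absorbed at constant cost and summarized as one token.
    """
    def __init__(self, frame_dim: int, state_dim: int, token_dim: int):
        super().__init__()
        self.A = nn.Linear(state_dim, state_dim, bias=False)  # state transition
        self.B = nn.Linear(frame_dim, state_dim, bias=False)  # input projection
        self.C = nn.Linear(state_dim, token_dim)              # state -> token readout
        self.state_dim = state_dim

    def init_state(self) -> torch.Tensor:
        return torch.zeros(1, self.state_dim)

    def forward(self, frame_feat: torch.Tensor, state: torch.Tensor):
        # Constant-cost recurrent update, independent of stream length.
        state = torch.tanh(self.A(state) + self.B(frame_feat))
        token = self.C(state)  # a single perception token for this time step
        return token, state

class CognitionGate(nn.Module):
    """Hypothetical tiny classifier scoring whether the current perception
    token marks an event worth waking the LLM for."""
    def __init__(self, token_dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(token_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, token: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.score(token))

def streaming_loop(frames, encoder, epfe, gate, llm, threshold=0.5):
    """Perception runs on every frame; cognition (the LLM) only on gated events.

    `encoder` and `llm` are caller-supplied callables: encoder maps a frame to
    a (1, frame_dim) feature, llm maps accumulated tokens to a response.
    """
    state = epfe.init_state()
    tokens = []
    for frame in frames:                    # full-FPS perception path
        feat = encoder(frame)
        token, state = epfe(feat, state)
        tokens.append(token)
        if gate(token).item() > threshold:  # event-gated LLM invocation
            yield llm(torch.cat(tokens, dim=0))

The point of this structure is that the per-frame path (encoder, EPFE, gate) has constant cost, so it keeps pace with the linear arrival of frames, while the quadratic-cost LLM runs only on the sparse set of gated events rather than at every time step.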
@article{ding2025_2503.06220,
  title   = {StreamMind: Unlocking Full Frame Rate Streaming Video Dialogue through Event-Gated Cognition},
  author  = {Xin Ding and Hao Wu and Yifan Yang and Shiqi Jiang and Donglin Bai and Zhibo Chen and Ting Cao},
  journal = {arXiv preprint arXiv:2503.06220},
  year    = {2025}
}