An Egocentric Vision-Language Model based Portable Real-time Smart Assistant
We present Vinci, a vision-language system designed to provide real-time, comprehensive AI assistance on portable devices. At its core, Vinci leverages EgoVideo-VL, a novel model that integrates an egocentric vision foundation model with a large language model (LLM), enabling advanced functionalities such as scene understanding, temporal grounding, video summarization, and future planning. To enhance its utility, Vinci incorporates a memory module for processing long video streams in real time while retaining contextual history, a generation module for producing visual action demonstrations, and a retrieval module that bridges egocentric and third-person perspectives to provide relevant how-to videos for skill acquisition. Unlike existing systems that often depend on specialized hardware, Vinci is hardware-agnostic, supporting deployment across a wide range of devices, including smartphones and wearable cameras. In our experiments, we first demonstrate the superior performance of EgoVideo-VL on multiple public benchmarks, showcasing its vision-language reasoning and contextual understanding capabilities. We then conduct a series of user studies to evaluate the real-world effectiveness of Vinci, highlighting its adaptability and usability in diverse scenarios. We hope Vinci can establish a new framework for portable, real-time egocentric AI systems, empowering users with contextual and actionable insights. All code for Vinci, including the frontend, backend, and models, is available at this https URL.
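To make the described architecture concrete, the sketch below shows one plausible way the pieces could fit together: a rolling memory module that summarizes an incoming egocentric video stream, wrapped around a vision-language core that answers user queries against the retained history. This is a minimal illustration only; the class and method names (Frame, MemoryModule, EgoVideoVL, Assistant) are hypothetical placeholders and not the actual Vinci or EgoVideo-VL API, and the retrieval and generation modules would hook in analogously.

```python
# Hypothetical sketch of the module composition described in the abstract.
# All names here are illustrative placeholders, not the real Vinci API.

from collections import deque
from dataclasses import dataclass


@dataclass
class Frame:
    timestamp: float
    image: bytes  # raw egocentric frame (placeholder payload)


class MemoryModule:
    """Keeps a rolling window of per-frame summaries so long video streams
    can be processed in real time without unbounded context growth."""

    def __init__(self, max_entries: int = 256):
        self.entries = deque(maxlen=max_entries)

    def add(self, timestamp: float, summary: str) -> None:
        self.entries.append((timestamp, summary))

    def context(self) -> str:
        return "\n".join(f"[{t:.1f}s] {s}" for t, s in self.entries)


class EgoVideoVL:
    """Stand-in for the egocentric vision-language core: turns frames plus
    accumulated context into textual responses (stubbed here)."""

    def describe(self, frame: Frame) -> str:
        return f"scene summary at t={frame.timestamp:.1f}s"  # stub

    def answer(self, question: str, context: str) -> str:
        return f"answer to '{question}' grounded in {len(context)} chars of history"  # stub


class Assistant:
    """Wires the core model to the memory module; a retrieval module (how-to
    videos) and generation module (visual demonstrations) would attach here."""

    def __init__(self):
        self.model = EgoVideoVL()
        self.memory = MemoryModule()

    def on_frame(self, frame: Frame) -> None:
        # Summarize each incoming frame and append it to the rolling memory.
        self.memory.add(frame.timestamp, self.model.describe(frame))

    def ask(self, question: str) -> str:
        # Answer a user query against the retained contextual history.
        return self.model.answer(question, self.memory.context())


if __name__ == "__main__":
    assistant = Assistant()
    for t in range(5):
        assistant.on_frame(Frame(timestamp=float(t), image=b""))
    print(assistant.ask("What did I just do?"))
```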
@article{huang2025_2503.04250,
  title={An Egocentric Vision-Language Model based Portable Real-time Smart Assistant},
  author={Yifei Huang and Jilan Xu and Baoqi Pei and Yuping He and Guo Chen and Mingfang Zhang and Lijin Yang and Zheng Nie and Jinyao Liu and Guoshun Fan and Dechen Lin and Fang Fang and Kunpeng Li and Chang Yuan and Xinyuan Chen and Yaohui Wang and Yali Wang and Yu Qiao and Limin Wang},
  journal={arXiv preprint arXiv:2503.04250},
  year={2025}
}