Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model

31 December 2024
Y. Huang
Jilan Xu
Baoqi Pei
Yuping He
Guo Chen
Lijin Yang
Xinyuan Chen
Yaohui Wang
Zheng Nie
J. Liu
Guoshun Fan
D. Lin
Fang Fang
Kunpeng Li
C. Yuan
Y. Wang
Yu Qiao
L. Wang
Abstract

We introduce Vinci, a real-time embodied smart assistant built upon an egocentric vision-language model. Designed for deployment on portable devices such as smartphones and wearable cameras, Vinci operates in an "always on" mode, continuously observing the environment to deliver seamless interaction and assistance. Users can wake up the system and engage in natural conversations to ask questions or seek assistance, with responses delivered through audio for hands-free convenience. With its ability to process long video streams in real time, Vinci can answer user queries about current observations and historical context while also providing task planning based on past interactions. To further enhance usability, Vinci integrates a video generation module that creates step-by-step visual demonstrations for tasks that require detailed guidance. We hope that Vinci can establish a robust framework for portable, real-time egocentric AI systems, empowering users with contextual and actionable insights. We release the complete implementation for the development of the device in conjunction with a demo web platform to test uploaded videos at this https URL.
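The "always on" design described above implies a continuous observation loop with a bounded memory of recent frames, so that queries can draw on both the current view and historical context. The following is a minimal sketch of that pattern; all class and method names (`AlwaysOnAssistant`, `observe`, `answer`, and the buffer parameters) are hypothetical illustrations, not the paper's published interface.

```python
from collections import deque


class AlwaysOnAssistant:
    """Toy sketch of an "always on" egocentric assistant loop.

    Hypothetical interface for illustration only. Frames stream in
    continuously and are kept in a bounded buffer, so a query can
    reference both the current observation and recent history.
    """

    def __init__(self, memory_seconds=60, fps=1):
        # Bounded buffer: only the most recent frames are retained,
        # which keeps memory constant for arbitrarily long streams.
        self.memory = deque(maxlen=memory_seconds * fps)
        self.awake = False

    def observe(self, frame):
        # Called for every incoming frame, even while "asleep".
        self.memory.append(frame)

    def wake(self):
        # Stand-in for wake-word detection.
        self.awake = True

    def answer(self, question):
        # Placeholder for the vision-language model call: here we just
        # report how much context would be passed to the model.
        if not self.awake:
            return None
        return f"{question!r} answered using {len(self.memory)} buffered frames"


assistant = AlwaysOnAssistant(memory_seconds=10, fps=2)
for t in range(50):  # 50 frames exceed the buffer capacity of 20
    assistant.observe(f"frame-{t}")
assistant.wake()
print(assistant.answer("What am I holding?"))
# → 'What am I holding?' answered using 20 buffered frames
```

The bounded `deque` stands in for whatever long-video memory mechanism the actual system uses; the point is only that an always-on loop must decouple passive observation from query answering.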
