
Building Egocentric Procedural AI Assistant: Methods, Benchmarks, and Challenges

19 pages main text, 7 figures, 10 pages bibliography
Abstract

Driven by recent advances in vision-language models (VLMs) and egocentric perception research, the emerging topic of the egocentric procedural AI assistant (EgoProceAssist) is introduced to support daily procedural tasks step by step from a first-person view. In this paper, we begin by identifying three core tasks in EgoProceAssist: egocentric procedural error detection, egocentric procedural learning, and egocentric procedural question answering. We then introduce two enabling dimensions: real-time and streaming video understanding, and proactive interaction in procedural contexts. We define these tasks within a new taxonomy as the essential functions of EgoProceAssist and illustrate how they can be deployed in real-world scenarios as daily activity assistants. Specifically, our work encompasses a comprehensive review of current techniques, relevant datasets, and evaluation metrics across these five areas. To clarify the gap between the proposed EgoProceAssist and existing VLM-based assistants, we conduct novel experiments that provide a comprehensive evaluation of representative VLM-based methods. Based on these findings and our technical analysis, we discuss the remaining challenges and suggest future research directions. Furthermore, an exhaustive list of the work surveyed in this study is publicly available in an actively maintained repository that continuously collects the latest work: this https URL.
