
Intent at a Glance: Gaze-Guided Robotic Manipulation via Foundation Models

Tracey Yee Hsin Tay
Xu Yan
Jonathan Ouyang
Daniel Wu
William Jiang
Jonathan Kao
Yuchen Cui
Main: 8 Pages
11 Figures
Bibliography: 3 Pages
3 Tables
Appendix: 5 Pages
Abstract

Designing intuitive interfaces for robotic control remains a central challenge in enabling effective human-robot interaction, particularly in assistive care settings. Eye gaze offers a fast, non-intrusive, and intent-rich input modality, making it an attractive channel for conveying user goals. In this work, we present GAMMA (Gaze Assisted Manipulation for Modular Autonomy), a system that leverages ego-centric gaze tracking and a vision-language model to infer user intent and autonomously execute robotic manipulation tasks. By contextualizing gaze fixations within the scene, the system maps visual attention to high-level semantic understanding, enabling skill selection and parameterization without task-specific training. We evaluate GAMMA on a range of table-top manipulation tasks and compare it against baseline gaze-based control without reasoning. Results demonstrate that GAMMA provides robust, intuitive, and generalizable control, highlighting the potential of combining foundation models and gaze for natural and scalable robot autonomy. Project website: this https URL
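The abstract describes a pipeline in which a gaze fixation is grounded to an object in the egocentric view and the result is handed to a vision-language model for skill selection. The sketch below is only an illustration of that idea under our own assumptions; the names (`GazeFixation`, `ground_fixation`, `build_vlm_prompt`, the `SKILLS` list) are hypothetical and are not taken from the paper, and the VLM call itself is left as a composed prompt rather than a real API invocation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GazeFixation:
    # Normalized image coordinates of the fixation in the egocentric frame.
    x: float
    y: float
    duration_s: float


@dataclass
class DetectedObject:
    label: str
    # Bounding box in normalized coordinates: (x_min, y_min, x_max, y_max).
    bbox: tuple


# Hypothetical skill library; a real system would parameterize each skill further.
SKILLS = ["pick_and_place", "pour", "open_drawer", "hand_over"]


def ground_fixation(fix: GazeFixation, objects: list) -> Optional[DetectedObject]:
    """Return the object whose bounding box contains the fixation,
    falling back to the nearest box center if none contains it."""
    for obj in objects:
        x0, y0, x1, y1 = obj.bbox
        if x0 <= fix.x <= x1 and y0 <= fix.y <= y1:
            return obj
    if not objects:
        return None

    def center_dist(obj: DetectedObject) -> float:
        x0, y0, x1, y1 = obj.bbox
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        return (cx - fix.x) ** 2 + (cy - fix.y) ** 2

    return min(objects, key=center_dist)


def build_vlm_prompt(target: DetectedObject, scene_labels: list) -> str:
    """Compose a text prompt asking a vision-language model to choose a skill
    and its target object, given the gazed-at object and the scene context."""
    return (
        f"The user is looking at: {target.label}.\n"
        f"Other objects in the scene: {', '.join(scene_labels)}.\n"
        f"Choose one skill from {SKILLS} and name its target object."
    )


if __name__ == "__main__":
    scene = [
        DetectedObject("mug", (0.10, 0.40, 0.25, 0.60)),
        DetectedObject("water bottle", (0.55, 0.35, 0.70, 0.70)),
    ]
    fixation = GazeFixation(x=0.62, y=0.50, duration_s=0.8)
    target = ground_fixation(fixation, scene)
    prompt = build_vlm_prompt(target, [o.label for o in scene])
    # In a full system this prompt, together with the egocentric image,
    # would be sent to the VLM, whose answer selects and parameterizes a skill.
    print(prompt)
```

Running the example prints a prompt naming the water bottle as the gazed-at object; any real deployment would replace the final print with a call to whichever VLM and skill executor the system uses.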
