Point What You Mean: Visually Grounded Instruction Policy

Hang Yu
Juntu Zhao
Yufeng Liu
Kaiyu Li
Cheng Ma
Di Zhang
Yingdong Hu
Guang Chen
Junyuan Xie
Junliang Guo
Junqiao Zhao
Yang Gao
Main: 1 page
Figures: 13
Tables: 5
Appendix: 14 pages
Abstract

Vision-Language-Action (VLA) models align vision and language with embodied control, but their object-referring ability remains limited when relying solely on text prompts, especially in cluttered or out-of-distribution (OOD) scenes. In this study, we introduce Point-VLA, a plug-and-play policy that augments language instructions with explicit visual cues (e.g., bounding boxes) to resolve referential ambiguity and enable precise object-level grounding. To efficiently scale visually grounded datasets, we further develop an automatic data-annotation pipeline that requires minimal human effort. We evaluate Point-VLA on diverse real-world referring tasks and observe consistently stronger performance than text-only instruction VLAs, particularly in cluttered or unseen-object scenarios, along with robust generalization. These results demonstrate that Point-VLA effectively resolves object-referring ambiguity through pixel-level visual grounding, achieving more generalizable embodied control.
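
The abstract does not specify how the visual cue is supplied to the policy. The sketch below illustrates one plausible interface, assuming the bounding box is drawn directly onto the observation image before it is passed to the policy together with a short instruction; all names (overlay_box, policy.predict, the box coordinates) are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' code): combining a bounding-box visual cue
# with a text instruction before querying a VLA policy.
import numpy as np
import cv2  # assumed available for drawing the cue


def overlay_box(image: np.ndarray, box_xyxy, color=(0, 255, 0), thickness=2) -> np.ndarray:
    """Burn an explicit bounding-box cue into a copy of the observation image."""
    x1, y1, x2, y2 = map(int, box_xyxy)
    cued = image.copy()
    cv2.rectangle(cued, (x1, y1), (x2, y2), color, thickness)
    return cued


# Hypothetical usage: the policy consumes the cued image plus a generic
# instruction; the box, not the text, disambiguates which object is meant.
observation = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in camera frame
target_box = (210, 140, 300, 230)                        # stand-in detection (e.g., from a detector or a human click)
cued_observation = overlay_box(observation, target_box)
instruction = "pick up the object marked by the box"
# action = policy.predict(cued_observation, instruction)  # policy API is assumed, not from the paper
```

Under this reading, the cue lives entirely in pixel space, which is what makes the approach plug-and-play: the underlying VLA still receives an image and a text prompt, so no architectural change is implied.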
