
Symbolically-Guided Visual Plan Inference from Uncurated Video Data

Abstract

Visual planning, which supplies a sequence of intermediate visual subgoals to a goal-conditioned low-level policy, achieves promising performance on long-horizon manipulation tasks. To obtain these subgoals, existing methods typically resort to video generation models, which suffer from hallucination and high computational cost. We present Vis2Plan, an efficient, explainable, and white-box visual planning framework powered by symbolic guidance. From raw, unlabeled play data, Vis2Plan harnesses vision foundation models to automatically extract a compact set of task symbols, which enables building a high-level symbolic transition graph for multi-goal, multi-stage planning. At test time, given a desired task goal, our planner conducts planning at the symbolic level and assembles a sequence of physically consistent intermediate sub-goal images grounded in the underlying symbolic representation. Vis2Plan outperforms strong diffusion-based video generation visual planners, delivering a 53% higher aggregate success rate in real-robot settings while generating visual plans 35× faster. These results indicate that Vis2Plan generates physically consistent image goals while offering fully inspectable reasoning steps.
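
To illustrate the kind of symbolic guidance the abstract describes, below is a minimal Python sketch of planning over a symbolic transition graph built from play data, then grounding the symbol path with stored frames. This is not the authors' implementation; the symbol labels, the frame bank, and all function names are illustrative assumptions.

# Hedged sketch: symbolic transition graph built from play trajectories,
# BFS planning over symbols, and grounding each symbol with a stored frame.
from collections import defaultdict, deque

def build_transition_graph(trajectories):
    """trajectories: list of symbol sequences extracted from play videos."""
    graph = defaultdict(set)
    for symbols in trajectories:
        for a, b in zip(symbols, symbols[1:]):
            if a != b:                      # keep only symbolic state changes
                graph[a].add(b)
    return graph

def symbolic_plan(graph, start, goal):
    """Breadth-first search over the symbolic graph; returns a symbol path."""
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None                              # goal unreachable in the graph

def assemble_visual_plan(symbol_path, frame_bank):
    """Ground each intermediate symbol with a representative play-data frame."""
    return [frame_bank[s] for s in symbol_path[1:]]  # skip the current state

# Toy usage with hypothetical symbols:
trajs = [["drawer_closed", "drawer_open", "cup_in_drawer"],
         ["drawer_closed", "drawer_open"]]
g = build_transition_graph(trajs)
print(symbolic_plan(g, "drawer_closed", "cup_in_drawer"))
# -> ['drawer_closed', 'drawer_open', 'cup_in_drawer']

Because the plan is a path through an explicitly constructed graph and each sub-goal image is a real frame from the data, every reasoning step can be inspected directly, in contrast to sampling frames from a generative video model.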

@article{yang2025_2505.08444,
  title={Symbolically-Guided Visual Plan Inference from Uncurated Video Data},
  author={Wenyan Yang and Ahmet Tikna and Yi Zhao and Yuying Zhang and Luigi Palopoli and Marco Roveri and Joni Pajarinen},
  journal={arXiv preprint arXiv:2505.08444},
  year={2025}
}