ResearchTrend.AI


EmbodiedVSR: Dynamic Scene Graph-Guided Chain-of-Thought Reasoning for Visual Spatial Tasks

14 March 2025
Yi Zhang
Qiang Zhang
Xiaozhu Ju
Zhaoyang Liu
Jilei Mao
Jingkai Sun
Jintao Wu
Shixiong Gao
Shihan Cai
Zhiyuan Qin
Linkai Liang
Jiaxu Wang
Yiqun Duan
Jiahang Cao
Renjing Xu
Jian Tang
    LM&Ro
    LRM
Abstract

While multimodal large language models (MLLMs) have made groundbreaking progress in embodied intelligence, they still face significant challenges in spatial reasoning for complex long-horizon tasks. To address this gap, we propose EmbodiedVSR (Embodied Visual Spatial Reasoning), a novel framework that integrates dynamic scene graph-guided Chain-of-Thought (CoT) reasoning to enhance spatial understanding for embodied agents. By explicitly constructing structured knowledge representations through dynamic scene graphs, our method enables zero-shot spatial reasoning without task-specific fine-tuning. This approach not only disentangles intricate spatial relationships but also aligns reasoning steps with actionable environmental dynamics. To rigorously evaluate performance, we introduce the eSpatial-Benchmark, a comprehensive dataset including real-world embodied scenarios with fine-grained spatial annotations and adaptive task difficulty levels. Experiments demonstrate that our framework significantly outperforms existing MLLM-based methods in accuracy and reasoning coherence, particularly in long-horizon tasks requiring iterative environment interaction. The results reveal the untapped potential of MLLMs for embodied intelligence when equipped with structured, explainable reasoning mechanisms, paving the way for more reliable deployment in real-world spatial applications. The code and datasets will be released soon.
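The core idea of the abstract, maintaining a dynamic scene graph and serializing it into a structured Chain-of-Thought prompt, can be sketched as below. This is a minimal illustration, not the paper's implementation: the `SceneGraph` class, the predicate names, and the prompt format are all hypothetical assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """A minimal dynamic scene graph: a set of objects plus
    directed spatial relations as (subject, predicate, object) triples."""
    objects: set = field(default_factory=set)
    relations: list = field(default_factory=list)

    def update(self, subj: str, pred: str, obj: str) -> None:
        """Refresh the graph as the environment changes: any stale
        relation between the same object pair is replaced."""
        self.objects.update({subj, obj})
        self.relations = [r for r in self.relations
                          if (r[0], r[2]) != (subj, obj)]
        self.relations.append((subj, pred, obj))

    def to_cot_prompt(self, question: str) -> str:
        """Serialize the graph into a structured prompt so the model
        reasons step by step over explicit spatial facts."""
        facts = "\n".join(f"- {s} {p} {o}" for s, p, o in self.relations)
        return (
            "Scene facts:\n" + facts +
            f"\n\nQuestion: {question}\n"
            "Reason step by step using only the facts above."
        )

# Build a toy tabletop scene and produce a grounded CoT prompt.
g = SceneGraph()
g.update("red block", "is_on", "table")
g.update("blue block", "is_on", "red block")
g.update("blue block", "left_of", "cup")
prompt = g.to_cot_prompt("Which object is at the bottom of the stack?")
```

In this sketch, zero-shot spatial reasoning comes from the prompt alone: the graph constrains the model to the listed facts, with no task-specific fine-tuning.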

@article{zhang2025_2503.11089,
  title={EmbodiedVSR: Dynamic Scene Graph-Guided Chain-of-Thought Reasoning for Visual Spatial Tasks},
  author={Yi Zhang and Qiang Zhang and Xiaozhu Ju and Zhaoyang Liu and Jilei Mao and Jingkai Sun and Jintao Wu and Shixiong Gao and Shihan Cai and Zhiyuan Qin and Linkai Liang and Jiaxu Wang and Yiqun Duan and Jiahang Cao and Renjing Xu and Jian Tang},
  journal={arXiv preprint arXiv:2503.11089},
  year={2025}
}