Enhancing Spatial Reasoning in Multimodal Large Language Models through Reasoning-based Segmentation

1 July 2025
Zhenhua Ning
Zhuotao Tian
Shaoshuai Shi
Guangming Lu
Daojing He
Wenjie Pei
Li Jiang
Main: 8 pages · Bibliography: 3 pages · Appendix: 5 pages · 14 figures · 7 tables
Abstract

Recent advances in point cloud perception have demonstrated remarkable progress in scene understanding through vision-language alignment leveraging large language models (LLMs). However, existing methods may still struggle with complex instructions that require accurate spatial reasoning, even though 3D point cloud data provide detailed spatial cues such as size and position for identifying the targets. To tackle this issue, we propose Relevant Reasoning Segmentation (R²S), a reasoning-based segmentation framework. The framework emulates human cognitive processes by decomposing spatial reasoning into two sequential stages: first identifying the relevant elements, then processing instructions guided by their associated visual priors. Furthermore, acknowledging the inadequacy of existing datasets for complex reasoning tasks, we introduce 3D ReasonSeg, a reasoning-based segmentation dataset comprising 25,185 training samples and 3,966 validation samples with precise annotations. Both quantitative and qualitative experiments demonstrate that R²S and 3D ReasonSeg effectively endow 3D point cloud perception with stronger spatial reasoning capabilities, and we hope they can serve as a new baseline and benchmark for future work.
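
The two-stage decomposition described in the abstract (first identify the relevant scene elements, then reason over their spatial priors to produce a segmentation) can be pictured with a minimal Python sketch. Everything below is a hypothetical illustration of that control flow only: the function names, the RelevantElement fields, and the placeholder heuristics are not taken from the paper, where both stages would instead be driven by a multimodal LLM and a segmentation backbone.

# Hypothetical sketch of the two-stage pipeline; names and heuristics are
# illustrative placeholders, not the authors' API.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class RelevantElement:
    """A candidate object with the spatial priors the second stage conditions on."""
    mask: np.ndarray    # per-point boolean mask over the input cloud
    center: np.ndarray  # (3,) centroid, a simple position prior
    size: np.ndarray    # (3,) bounding-box extents, a simple size prior


def identify_relevant_elements(points: np.ndarray, instruction: str) -> List[RelevantElement]:
    """Stage 1: propose scene elements that could matter for the instruction.

    A real system would use an LLM-guided model; here we return a single dummy
    candidate covering all points so the sketch runs end to end.
    """
    mask = np.ones(len(points), dtype=bool)
    xyz = points[:, :3]
    return [RelevantElement(mask=mask,
                            center=xyz.mean(axis=0),
                            size=xyz.max(axis=0) - xyz.min(axis=0))]


def reason_over_priors(points: np.ndarray, instruction: str,
                       candidates: List[RelevantElement]) -> np.ndarray:
    """Stage 2: resolve the instruction against the candidates' spatial priors.

    Placeholder logic: pick the candidate with the largest extent and return its mask.
    """
    best = max(candidates, key=lambda c: float(c.size.prod()))
    return best.mask


def segment(points: np.ndarray, instruction: str) -> np.ndarray:
    """Run the two stages in sequence and return a per-point segmentation mask."""
    candidates = identify_relevant_elements(points, instruction)
    return reason_over_priors(points, instruction, candidates)


if __name__ == "__main__":
    cloud = np.random.rand(1024, 6)  # xyz + rgb, a toy stand-in for a real scan
    mask = segment(cloud, "the chair closest to the window")
    print(mask.shape, mask.dtype)

The stubs only make the sequential structure concrete; the point of the decomposition is that stage 2 receives explicit size and position priors from stage 1 rather than reasoning over the raw cloud directly.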

@article{ning2025_2506.23120,
  title={Enhancing Spatial Reasoning in Multimodal Large Language Models through Reasoning-based Segmentation},
  author={Zhenhua Ning and Zhuotao Tian and Shaoshuai Shi and Guangming Lu and Daojing He and Wenjie Pei and Li Jiang},
  journal={arXiv preprint arXiv:2506.23120},
  year={2025}
}