
DR²Seg: Decomposed Two-Stage Rollouts for Efficient Reasoning Segmentation in Multimodal Large Language Models

Yulin He
Wei Chen
Zhikang Jian
Tianhang Guo
Wenjuan Zhou
Minglong Li
Shaowu Yang
Wenjing Yang
Abstract

Reasoning segmentation is an emerging vision-language task that requires reasoning over intricate text queries to precisely segment objects. However, existing methods typically suffer from overthinking, generating verbose reasoning chains that interfere with object localization in multimodal large language models (MLLMs). To address this issue, we propose DR²Seg, a self-rewarding framework that improves both reasoning efficiency and segmentation accuracy without requiring extra thinking supervision. DR²Seg employs a two-stage rollout strategy that decomposes reasoning segmentation into multimodal reasoning and referring segmentation. In the first stage, the model generates a self-contained description that explicitly specifies the target object. In the second stage, this description replaces the original complex query to verify its self-containment. Based on this design, two self-rewards are introduced to mitigate overthinking and the associated attention dispersion. Extensive experiments conducted on 3B and 7B variants of Qwen2.5-VL, as well as on both SAM2 and SAM3, demonstrate that DR²Seg consistently improves reasoning efficiency and overall segmentation accuracy.
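To make the two-stage design concrete, here is a minimal Python sketch of how such a rollout and its two self-rewards might be wired together. Everything in it is an illustrative assumption based only on the abstract: the object handles (mllm, segmenter), the function names (generate, segment), and the specific reward formulas are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of the decomposed two-stage rollout described in the
# abstract. All interfaces here are assumed for illustration; the paper's
# actual models are Qwen2.5-VL (reasoning) and SAM2/SAM3 (segmentation).

from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class RolloutResult:
    description: str         # self-contained target description (stage 1)
    mask_from_query: Any     # mask predicted from the original complex query
    mask_from_desc: Any      # mask predicted from the description alone


def two_stage_rollout(mllm: Any, segmenter: Any, image: Any, query: str) -> RolloutResult:
    """Decompose reasoning segmentation into (1) multimodal reasoning
    and (2) referring segmentation, per the abstract's description."""
    # Stage 1: the MLLM reads the complex query and emits a short,
    # self-contained description that explicitly names the target object.
    description = mllm.generate(image, query)  # hypothetical call

    # Stage 2: the description replaces the original query. If it is
    # truly self-contained, segmenting from the description alone should
    # agree with segmenting from the full query.
    mask_from_query = segmenter.segment(image, query)        # hypothetical call
    mask_from_desc = segmenter.segment(image, description)   # hypothetical call
    return RolloutResult(description, mask_from_query, mask_from_desc)


def self_rewards(result: RolloutResult,
                 iou_fn: Callable[[Any, Any], float],
                 max_words: int = 64) -> tuple[float, float]:
    """Two illustrative self-rewards: one penalizes verbose reasoning
    (overthinking), the other checks self-containment by agreement
    between the stage-1 and stage-2 masks."""
    brevity = max(0.0, 1.0 - len(result.description.split()) / max_words)
    containment = iou_fn(result.mask_from_query, result.mask_from_desc)
    return brevity, containment
```

Note that both rewards are computed from the model's own rollouts (mask agreement and description length), which is consistent with the claim that no extra thinking supervision is required; the exact reward shaping in the paper may differ.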
