
Shape of Thought: Progressive Object Assembly via Visual Chain-of-Thought

Yu Huo
Siyu Zhang
Kun Zeng
Haoyue Liu
Owen Lee
Junlin Chen
Yuquan Lu
Yifu Guo
Yaodong Liang
Xiaoying Tang
Main: 8 pages · 22 figures · Bibliography: 3 pages · 9 tables · Appendix: 30 pages
Abstract

Multimodal models for text-to-image generation have achieved strong visual fidelity, yet they remain brittle under compositional structural constraints, notably generative numeracy, attribute binding, and part-level relations. To address these challenges, we propose Shape-of-Thought (SoT), a visual CoT framework that enables progressive shape assembly via coherent 2D projections without external engines at inference time. SoT trains a unified multimodal autoregressive model to generate interleaved textual plans and rendered intermediate states, helping the model capture shape-assembly logic without producing explicit geometric representations. To support this paradigm, we introduce SoT-26K, a large-scale dataset of grounded assembly traces derived from part-based CAD hierarchies, and T2S-CompBench, a benchmark for evaluating structural integrity and trace faithfulness. Fine-tuning on SoT-26K achieves 88.4% on component numeracy and 84.8% on structural topology, outperforming text-only baselines by around 20%. SoT establishes a new paradigm for transparent, process-supervised compositional generation. The code is available at this https URL. The SoT-26K dataset will be released upon acceptance.
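To make the idea of a step-wise assembly trace concrete, the following is a minimal, purely illustrative sketch of the kind of data structure such a trace might be stored in, along with a toy component-numeracy check. All names (`AssemblyStep`, `AssemblyTrace`, `component_numeracy_ok`) are hypothetical and are not taken from the paper's released code; the rendered intermediate images are omitted entirely.

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class AssemblyStep:
    """One step of a hypothetical SoT-style trace: a textual plan plus
    the parts newly present in the rendered state after this step."""
    plan: str
    parts_added: list


@dataclass
class AssemblyTrace:
    steps: list = field(default_factory=list)

    def add_step(self, plan, parts_added):
        self.steps.append(AssemblyStep(plan, parts_added))

    def parts(self):
        # All parts accumulated over the progressive assembly.
        out = []
        for step in self.steps:
            out.extend(step.parts_added)
        return out


def component_numeracy_ok(trace, expected_counts):
    """Toy stand-in for a component-numeracy check: does the finished
    trace contain exactly the expected count of each part type?"""
    return Counter(trace.parts()) == Counter(expected_counts)


# Toy trace for "a chair with four legs, a seat, and a back".
trace = AssemblyTrace()
trace.add_step("Place the seat as the base part.", ["seat"])
trace.add_step("Attach four legs under the seat.", ["leg"] * 4)
trace.add_step("Mount the backrest on the seat.", ["back"])

print(component_numeracy_ok(trace, {"seat": 1, "leg": 4, "back": 1}))  # True
```

In the actual framework each step would additionally carry a rendered 2D projection of the partial shape, and the model is trained to emit the textual plan and the image tokens in an interleaved autoregressive sequence.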
