TangramPuzzle: Evaluating Multimodal Large Language Models with Compositional Spatial Reasoning

Daixian Liu
Jiayi Kuang
Yinghui Li
Yangning Li
Di Yin
Haoyu Cao
Xing Sun
Ying Shen
Hai-Tao Zheng
Liang Lin
Philip S. Yu
Main: 11 pages · 5 figures · 5 tables · Bibliography: 2 pages · Appendix: 3 pages
Abstract

Multimodal Large Language Models (MLLMs) have achieved remarkable progress in visual recognition and semantic understanding. Nevertheless, their ability to perform precise compositional spatial reasoning remains largely unexplored. Existing benchmarks often involve relatively simple tasks and rely on semantic approximations or coarse relative positioning, while their evaluation metrics are typically limited and lack rigorous mathematical formulations. To bridge this gap, we introduce TangramPuzzle, a geometry-grounded benchmark designed to evaluate compositional spatial reasoning through the lens of the classic Tangram game. We propose the Tangram Construction Expression (TCE), a symbolic geometric framework that grounds tangram assemblies in exact, machine-verifiable coordinate specifications, mitigating the ambiguity of visual approximation. We design two complementary tasks: Outline Prediction, which demands inferring global shapes from local components, and End-to-End Code Generation, which requires solving inverse geometric assembly problems. Extensive evaluation of advanced open-source and proprietary models reveals a consistent failure mode: MLLMs tend to prioritize matching the target silhouette while neglecting geometric constraints, producing distorted or deformed pieces.
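
To make the idea of an exact, machine-verifiable check concrete, the minimal sketch below encodes pieces as vertex coordinates and tests whether a candidate placement is a rigid motion of its canonical piece, i.e. the kind of distortion-or-deformation failure the abstract describes. The actual TCE syntax is not given here, so the piece names, canonical coordinates, and the `is_undeformed` helper are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch only: piece names, canonical layouts, and the congruence
# check are assumptions about what a coordinate-level verifier could look like.
import math

# Canonical vertex lists for two tangram pieces on a unit square (assumed layout).
CANONICAL = {
    "large_triangle": [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5)],
    "square":         [(0.0, 0.0), (0.25, 0.25), (0.0, 0.5), (-0.25, 0.25)],
}

def pairwise_distances(pts):
    """Sorted multiset of all pairwise vertex distances; invariant under
    rotation, translation, and reflection, but changed by stretch or shear."""
    return sorted(
        math.dist(pts[i], pts[j])
        for i in range(len(pts)) for j in range(i + 1, len(pts))
    )

def is_undeformed(piece, placed_vertices, tol=1e-6):
    """Necessary condition for a valid placement: the placed vertices must be
    congruent to the canonical piece (rigid motion only, no deformation)."""
    ref = pairwise_distances(CANONICAL[piece])
    got = pairwise_distances(placed_vertices)
    return len(ref) == len(got) and all(abs(a - b) <= tol for a, b in zip(ref, got))

# A valid placement: the large triangle rotated 90 degrees and translated.
rotated = [(-y + 2.0, x + 1.0) for (x, y) in CANONICAL["large_triangle"]]
print(is_undeformed("large_triangle", rotated))    # True

# A silhouette-matching placement that stretches the piece: flagged as invalid.
stretched = [(x * 1.3, y) for (x, y) in CANONICAL["large_triangle"]]
print(is_undeformed("large_triangle", stretched))  # False
```

A distance-multiset comparison like this is only a necessary condition for congruence; a full TCE-style verifier would presumably also check piece identity, non-overlap, and agreement with the target silhouette, none of which is specified in the abstract.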
