
ART: Articulated Reconstruction Transformer

Zizhang Li
Cheng Zhang
Zhengqin Li
Henry Howard-Jenkins
Zhaoyang Lv
Chen Geng
Jiajun Wu
Richard Newcombe
Jakob Engel
Zhao Dong
Main: 8 pages, 7 figures, 3 tables · Bibliography: 4 pages · Appendix: 2 pages
Abstract

We introduce ART, Articulated Reconstruction Transformer -- a category-agnostic, feed-forward model that reconstructs complete 3D articulated objects from only sparse, multi-state RGB images. Previous methods for articulated object reconstruction either rely on slow optimization with fragile cross-state correspondences or use feed-forward models limited to specific object categories. In contrast, ART treats articulated objects as assemblies of rigid parts, formulating reconstruction as part-based prediction. Our newly designed transformer architecture maps sparse image inputs to a set of learnable part slots, from which ART jointly decodes unified representations for individual parts, including their 3D geometry, texture, and explicit articulation parameters. The resulting reconstructions are physically interpretable and readily exportable for simulation. Trained on a large-scale, diverse dataset with per-part supervision, and evaluated across diverse benchmarks, ART achieves significant improvements over existing baselines and establishes a new state of the art for articulated object reconstruction from image inputs.
