
SAGE: Scalable Agentic 3D Scene Generation for Embodied AI

Hongchi Xia
Xuan Li
Zhaoshuo Li
Qianli Ma
Jiashu Xu
Ming-Yu Liu
Yin Cui
Tsung-Yi Lin
Wei-Chiu Ma
Shenlong Wang
Shuran Song
Fangyin Wei
Main: 9 pages, 13 figures, 5 tables; bibliography: 4 pages; appendix: 5 pages
Abstract

Real-world data collection for embodied agents remains costly and unsafe, calling for scalable, realistic, and simulator-ready 3D environments. However, existing scene-generation systems often rely on rule-based or task-specific pipelines, yielding artifacts and physically invalid scenes. We present SAGE, an agentic framework that, given a user-specified embodied task (e.g., "pick up a bowl and place it on the table"), understands the intent and automatically generates simulation-ready environments at scale. The agent couples multiple generators for layout and object composition with critics that evaluate semantic plausibility, visual realism, and physical stability. Through iterative reasoning and adaptive tool selection, it self-refines the scenes until they meet user intent and physical validity. The resulting environments are realistic, diverse, and directly deployable in modern simulators for policy training. Policies trained purely on this data exhibit clear scaling trends and generalize to unseen objects and layouts, demonstrating the promise of simulation-driven scaling for embodied AI. Code, demos, and the SAGE-10k dataset can be found on the project page: this https URL.
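The generator-critic loop the abstract describes can be sketched in miniature. The following is a hypothetical illustration, not the paper's implementation: the `Scene` class, the three toy critics, and the scoring scheme are all assumptions made for the sake of the example; in SAGE the generators and critics are far richer (layout/object composition tools and semantic, visual, and physics evaluators).

```python
from dataclasses import dataclass

@dataclass
class Scene:
    """Toy stand-in for a generated 3D scene description."""
    layout: str
    quality: float = 0.0

# Hypothetical critics: each scores a scene in [0, 1] along one axis,
# mirroring the paper's semantic-plausibility, visual-realism, and
# physical-stability critics.
def semantic_critic(scene):  return min(1.0, scene.quality + 0.4)
def realism_critic(scene):   return min(1.0, scene.quality + 0.3)
def stability_critic(scene): return min(1.0, scene.quality + 0.2)

CRITICS = [semantic_critic, realism_critic, stability_critic]

def generate(task, attempt):
    # Stand-in generator: quality improves as the agent refines.
    return Scene(layout=f"{task} (draft {attempt})", quality=0.3 * attempt)

def refine_until_valid(task, threshold=0.8, max_iters=5):
    """Generate-critique-refine loop: regenerate until every critic passes."""
    for attempt in range(1, max_iters + 1):
        scene = generate(task, attempt)
        scores = [critic(scene) for critic in CRITICS]
        if min(scores) >= threshold:
            return scene, attempt
    return scene, max_iters

scene, rounds = refine_until_valid("pick up a bowl and place it on the table")
print(rounds)  # number of refinement rounds before all critics pass
```

The key design point the loop captures is that acceptance is gated on the *minimum* critic score, so a scene must simultaneously satisfy semantics, realism, and stability before it is emitted for policy training.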
