EditRoom: LLM-parameterized Graph Diffusion for Composable 3D Room Layout Editing

3 October 2024
Kaizhi Zheng
Xiaotong Chen
Xuehai He
Jing Gu
Linjie Li
Zhengyuan Yang
Kevin Qinghong Lin
Jianfeng Wang
Lijuan Wang
Xin Eric Wang
Communities: KELM, DiffM
Abstract

Given the steep learning curve of professional 3D software and the time-consuming process of managing large 3D assets, language-guided 3D scene editing has significant potential in fields such as virtual reality, augmented reality, and gaming. However, recent approaches to language-guided 3D scene editing either require manual intervention or focus only on appearance modifications without supporting comprehensive scene layout changes. In response, we propose EditRoom, a unified framework capable of executing a variety of layout edits through natural language commands, without requiring manual intervention. Specifically, EditRoom leverages Large Language Models (LLMs) for command planning and generates target scenes using a diffusion-based method, enabling six types of edits: rotate, translate, scale, replace, add, and remove. To address the lack of data for language-guided 3D scene editing, we have developed an automatic pipeline to augment existing 3D scene synthesis datasets and introduced EditRoom-DB, a large-scale dataset with 83k editing pairs, for training and evaluation. Our experiments demonstrate that our approach consistently outperforms existing baselines across all metrics, indicating higher accuracy and coherence in language-guided scene layout editing.
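To make the two-stage design in the abstract concrete, below is a minimal sketch of what the LLM command-planning stage could look like: a free-form instruction is decomposed into a sequence of the six atomic edits (rotate, translate, scale, replace, add, remove) that a downstream diffusion-based generator would then execute. The function names, prompt, and JSON schema here are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of the command-planning stage: an LLM decomposes a
# free-form room-editing instruction into atomic edits. All names and the
# JSON schema are assumptions for illustration only.
import json
from dataclasses import dataclass

ATOMIC_EDITS = {"rotate", "translate", "scale", "replace", "add", "remove"}

@dataclass
class EditCommand:
    op: str          # one of ATOMIC_EDITS
    target: str      # object the edit applies to, e.g. "sofa"
    params: dict     # op-specific parameters, e.g. {"angle_deg": 90}

PLANNER_PROMPT = (
    "Decompose the following room-editing instruction into a JSON list of "
    "atomic edits. Each item must have 'op' (rotate|translate|scale|replace|"
    "add|remove), 'target', and 'params'.\nInstruction: {instruction}"
)

def plan_edits(instruction: str, llm_complete) -> list[EditCommand]:
    """Ask an LLM to break a natural-language command into atomic edits."""
    raw = llm_complete(PLANNER_PROMPT.format(instruction=instruction))
    commands = []
    for item in json.loads(raw):
        if item["op"] not in ATOMIC_EDITS:
            raise ValueError(f"Unsupported edit type: {item['op']}")
        commands.append(EditCommand(item["op"], item["target"], item.get("params", {})))
    return commands

if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API; a real planner would call a model.
    def fake_llm(prompt: str) -> str:
        return json.dumps([
            {"op": "remove", "target": "floor lamp", "params": {}},
            {"op": "translate", "target": "sofa", "params": {"offset_m": [0.5, 0.0, 0.0]}},
        ])

    for cmd in plan_edits("Take out the floor lamp and shift the sofa half a meter right", fake_llm):
        print(cmd)
```

In such a pipeline, each structured `EditCommand` would condition the graph-diffusion generator that produces the edited scene layout; the composability comes from chaining atomic edits rather than asking the generator to interpret raw language directly.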

View on arXiv
@article{zheng2025_2410.12836,
  title={EditRoom: LLM-parameterized Graph Diffusion for Composable 3D Room Layout Editing},
  author={Kaizhi Zheng and Xiaotong Chen and Xuehai He and Jing Gu and Linjie Li and Zhengyuan Yang and Kevin Lin and Jianfeng Wang and Lijuan Wang and Xin Eric Wang},
  journal={arXiv preprint arXiv:2410.12836},
  year={2025}
}