RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion

10 April 2024
Jaidev Shriram, Alex Trevithick, Lingjie Liu, Ravi Ramamoorthi
DiffM, 3DGS
Abstract

We introduce RealmDreamer, a technique for generating forward-facing 3D scenes from text descriptions. Our method optimizes a 3D Gaussian Splatting representation to match complex text prompts using pretrained diffusion models. Our key insight is to leverage 2D inpainting diffusion models conditioned on an initial scene estimate to provide low variance supervision for unknown regions during 3D distillation. In conjunction, we imbue high-fidelity geometry with geometric distillation from a depth diffusion model, conditioned on samples from the inpainting model. We find that the initialization of the optimization is crucial, and provide a principled methodology for doing so. Notably, our technique doesn't require video or multi-view data and can synthesize various high-quality 3D scenes in different styles with complex layouts. Further, the generality of our method allows 3D synthesis from a single image. As measured by a comprehensive user study, our method outperforms all existing approaches, preferred by 88-95%. Project Page: this https URL
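The abstract describes an optimization loop in which rendered views of a 3D Gaussian Splatting scene are supervised by an inpainting diffusion model (for regions unknown to the initial scene estimate) and a depth diffusion model (for geometry). The sketch below illustrates that loop in a highly simplified form; every name (`renderer`, `inpaint_diffusion`, `depth_diffusion`) is a hypothetical placeholder, and a plain reconstruction loss stands in for the distillation objective, so this is not the authors' implementation.

```python
# Hypothetical sketch of the optimization described in the abstract, not the
# authors' code. Assumes `gaussians` is a torch nn.Module holding the 3DGS
# parameters, `renderer` is a differentiable renderer returning (rgb, depth),
# and the two diffusion models are callables producing 2D targets.

import torch
import torch.nn.functional as F

def distill_scene(gaussians, renderer, inpaint_diffusion, depth_diffusion,
                  cameras, steps=2000, lr=1e-2, w_depth=0.5):
    opt = torch.optim.Adam(gaussians.parameters(), lr=lr)
    for step in range(steps):
        cam = cameras[step % len(cameras)]
        rgb, depth = renderer(gaussians, cam)

        # Mask of pixels not covered by the initial scene estimate
        # (e.g. disocclusions revealed from a novel viewpoint).
        mask = (depth <= 0).float()

        # Low-variance 2D targets: the inpainting model fills the unknown
        # regions of the current render, and the depth model provides a
        # geometric target conditioned on that inpainted sample.
        with torch.no_grad():
            target_rgb = inpaint_diffusion(rgb, mask)
            target_depth = depth_diffusion(target_rgb)

        loss_rgb = F.mse_loss(rgb, target_rgb)
        loss_depth = F.mse_loss(depth, target_depth)  # geometric distillation
        loss = loss_rgb + w_depth * loss_depth

        opt.zero_grad()
        loss.backward()
        opt.step()
    return gaussians
```

The paper's actual objective is a diffusion-based distillation during 3DGS optimization; the MSE losses above are only a stand-in to show how the inpainting and depth targets enter the same loop.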

@article{shriram2024_2404.07199,
  title={RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion},
  author={Jaidev Shriram and Alex Trevithick and Lingjie Liu and Ravi Ramamoorthi},
  journal={arXiv preprint arXiv:2404.07199},
  year={2024}
}