PanoDreamer: Consistent Text to 360-Degree Scene Generation

7 April 2025
Zhexiao Xiong
Zhang Chen
Zhong Li
Yi Xu
Nathan Jacobs
    3DGS
    VGen
Abstract

Automatically generating a complete 3D scene from a text description, a reference image, or both has significant applications in fields such as virtual reality and gaming. However, current methods often produce low-quality textures and inconsistent 3D structure, especially when extrapolating far beyond the field of view of the reference image. To address these challenges, we propose PanoDreamer, a novel framework for consistent 3D scene generation with flexible text and image control. Our approach employs a large language model and a warp-refine pipeline to first generate an initial set of images and then composite them into a 360-degree panorama. This panorama is lifted into 3D to form an initial point cloud. We then use several strategies to generate additional images from different viewpoints that remain consistent with the initial point cloud, using them to expand and refine it. Given the resulting set of images, we apply 3D Gaussian Splatting to create the final 3D scene, which can be rendered from arbitrary viewpoints. Experiments demonstrate the effectiveness of PanoDreamer in generating high-quality, geometrically consistent 3D scenes.
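
The warp-refine and Gaussian Splatting stages rely on large generative and reconstruction models, but the intermediate step of lifting the panorama into an initial point cloud reduces to spherical back-projection of an RGB-D panorama. The sketch below illustrates that step only; it is not the authors' code, and the equirectangular convention, axis orientation, and function name are assumptions for illustration.

import numpy as np

def panorama_to_point_cloud(rgb: np.ndarray, depth: np.ndarray):
    """Back-project an equirectangular RGB-D panorama to a colored point cloud.

    rgb:   (H, W, 3) uint8 color panorama.
    depth: (H, W) per-pixel depth along each viewing ray.
    Returns (N, 3) points and (N, 3) colors in [0, 1], with N = H * W.
    """
    H, W = depth.shape
    # Pixel centers -> spherical angles for an equirectangular image:
    # longitude (azimuth) in [-pi, pi), latitude (elevation) in [-pi/2, pi/2].
    u = (np.arange(W) + 0.5) / W
    v = (np.arange(H) + 0.5) / H
    lon = (u - 0.5) * 2.0 * np.pi
    lat = (0.5 - v) * np.pi
    lon, lat = np.meshgrid(lon, lat)          # both (H, W)

    # Unit viewing rays in an assumed y-up, z-forward frame.
    dirs = np.stack([
        np.cos(lat) * np.sin(lon),            # x: right
        np.sin(lat),                          # y: up
        np.cos(lat) * np.cos(lon),            # z: forward
    ], axis=-1)                               # (H, W, 3)

    points = dirs * depth[..., None]          # scale each ray by its depth
    colors = rgb.reshape(-1, 3).astype(np.float64) / 255.0
    return points.reshape(-1, 3), colors

if __name__ == "__main__":
    # Toy example: a constant-depth gray panorama, just to check shapes.
    H, W = 256, 512
    rgb = np.full((H, W, 3), 128, dtype=np.uint8)
    depth = np.full((H, W), 2.0)
    pts, cols = panorama_to_point_cloud(rgb, depth)
    print(pts.shape, cols.shape)              # (131072, 3) (131072, 3)

In the full pipeline, the resulting point cloud would be reprojected into new camera poses and the uncovered regions filled by the warp-refine stage before 3D Gaussian Splatting reconstructs the final scene.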

@article{xiong2025_2504.05152,
  title={PanoDreamer: Consistent Text to 360-Degree Scene Generation},
  author={Zhexiao Xiong and Zhang Chen and Zhong Li and Yi Xu and Nathan Jacobs},
  journal={arXiv preprint arXiv:2504.05152},
  year={2025}
}