Wonderland: Navigating 3D Scenes from a Single Image

How can one efficiently generate high-quality, wide-scope 3D scenes from arbitrary single images? Existing methods suffer from several drawbacks, such as requiring multi-view data, time-consuming per-scene optimization, distorted geometry in occluded areas, and low visual quality in backgrounds. Our novel 3D scene reconstruction pipeline overcomes these limitations to tackle this challenge. Specifically, we introduce a large-scale reconstruction model that leverages latents from a video diffusion model to predict 3D Gaussian Splattings of scenes in a feed-forward manner. The video diffusion model is designed to create videos that precisely follow specified camera trajectories, allowing it to generate compressed video latents that encode multi-view information while maintaining 3D consistency. We train the 3D reconstruction model to operate on the video latent space with a progressive learning strategy, enabling the efficient generation of high-quality, wide-scope, and generic 3D scenes. Extensive evaluations across various datasets affirm that our model significantly outperforms existing single-view 3D scene generation methods, especially with out-of-domain images. Thus, we demonstrate for the first time that a 3D reconstruction model can be effectively built upon the latent space of a diffusion model to realize efficient 3D scene generation.
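
To make the data flow of the described pipeline concrete, below is a minimal PyTorch-style sketch, not the paper's implementation: the module names (CameraGuidedVideoDiffusion, LatentReconstructionModel), the latent and Gaussian-parameter dimensions, and the sample_latents interface are all assumptions made for illustration. It shows a single image plus a camera trajectory conditioning a video diffusion model that yields compressed video latents, from which a feed-forward reconstruction model regresses per-pixel 3D Gaussian parameters without per-scene optimization.

import torch
import torch.nn as nn


class CameraGuidedVideoDiffusion(nn.Module):
    """Hypothetical camera-conditioned video diffusion model (assumed interface)."""

    def __init__(self, latent_dim=16):
        super().__init__()
        self.latent_dim = latent_dim

    def sample_latents(self, image, camera_traj, num_frames=16, latent_hw=(32, 32)):
        # Placeholder: a real model would run iterative denoising conditioned on
        # the input image and camera trajectory; here we only return latents of
        # the expected shape (B, T, C, H', W') to illustrate the interface.
        b = image.shape[0]
        return torch.randn(b, num_frames, self.latent_dim, *latent_hw)


class LatentReconstructionModel(nn.Module):
    """Hypothetical feed-forward model that regresses 3D Gaussian parameters
    directly from video latents; 14 values per latent pixel is one common
    parameterization (3 position + 3 scale + 4 rotation + 1 opacity + 3 color),
    assumed here for illustration."""

    def __init__(self, latent_dim=16, gaussian_dim=14):
        super().__init__()
        self.head = nn.Conv2d(latent_dim, gaussian_dim, kernel_size=1)

    def forward(self, video_latents):
        b, t, c, h, w = video_latents.shape
        x = video_latents.reshape(b * t, c, h, w)
        gaussians = self.head(x)                   # (B*T, 14, H', W')
        return gaussians.reshape(b, t, -1, h, w)   # (B, T, 14, H', W')


# Single image + camera trajectory -> video latents -> 3D Gaussians,
# all in a feed-forward pass (no per-scene optimization).
image = torch.randn(1, 3, 256, 256)        # one input image
camera_traj = torch.randn(1, 16, 4, 4)     # 16 camera poses (4x4 extrinsics)

diffusion = CameraGuidedVideoDiffusion()
reconstructor = LatentReconstructionModel()

latents = diffusion.sample_latents(image, camera_traj, num_frames=16)
gaussians = reconstructor(latents)
print(gaussians.shape)                     # torch.Size([1, 16, 14, 32, 32])
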
@article{liang2025_2412.12091,
  title   = {Wonderland: Navigating 3D Scenes from a Single Image},
  author  = {Hanwen Liang and Junli Cao and Vidit Goel and Guocheng Qian and Sergei Korolev and Demetri Terzopoulos and Konstantinos N. Plataniotis and Sergey Tulyakov and Jian Ren},
  journal = {arXiv preprint arXiv:2412.12091},
  year    = {2025}
}