DreamComposer++: Empowering Diffusion Models with Multi-View Conditions for 3D Content Generation

Yunhan Yang
Shuo Chen
Yukun Huang
Xiaoyang Wu
Yuan-Chen Guo
Edmund Y. Lam
Hengshuang Zhao
Tong He
Xihui Liu
Main: 14 pages · 17 figures · 7 tables · Bibliography: 3 pages
Abstract

Recent advancements in leveraging pre-trained 2D diffusion models have enabled the generation of high-quality novel views from a single in-the-wild image. However, existing works struggle to produce controllable novel views because they lack information from multiple views. In this paper, we present DreamComposer++, a flexible and scalable framework designed to improve current view-aware diffusion models by incorporating multi-view conditions. Specifically, DreamComposer++ uses a view-aware 3D lifting module to extract 3D representations of an object from each input view. These representations are then aggregated and rendered into latent features of the target view by the multi-view feature fusion module. Finally, the target-view features are injected into pre-trained image or video diffusion models for novel view synthesis. Experimental results demonstrate that DreamComposer++ integrates seamlessly with state-of-the-art view-aware diffusion models and enhances their ability to generate controllable novel views from multi-view conditions. This advancement facilitates controllable 3D object reconstruction and enables a wide range of applications.
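To make the pipeline described above more concrete, the sketch below illustrates the three stages the abstract names: per-view 3D lifting, multi-view feature fusion into a target-view condition, and injection of that condition into a diffusion model's latent. This is a minimal, assumption-based illustration; the module names, tensor shapes, tri-plane-like representation, weighted-average fusion rule, and residual injection are my own simplifications, not the authors' released implementation.

```python
# Hedged sketch of a DreamComposer++-style conditioning pipeline (illustrative only).
import torch
import torch.nn as nn

class ViewAware3DLifting(nn.Module):
    """Lifts a single-view 2D latent into a (hypothetical) tri-plane-like 3D feature."""
    def __init__(self, in_ch=4, feat_ch=32, planes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(feat_ch, feat_ch * planes, 3, padding=1),
        )
        self.planes, self.feat_ch = planes, feat_ch

    def forward(self, latent):                       # latent: (B, C, H, W)
        feats = self.encoder(latent)                 # (B, planes*feat_ch, H, W)
        b, _, h, w = feats.shape
        return feats.view(b, self.planes, self.feat_ch, h, w)

class MultiViewFeatureFusion(nn.Module):
    """Aggregates lifted features from all input views into one target-view condition."""
    def __init__(self, feat_ch=32, out_ch=4):
        super().__init__()
        self.proj = nn.Conv2d(feat_ch, out_ch, 1)

    def forward(self, lifted_views, view_weights):
        # lifted_views: (B, V, planes, feat_ch, H, W); view_weights: (B, V)
        pooled = lifted_views.mean(dim=2)                         # average over planes
        w = view_weights.softmax(dim=1)[..., None, None, None]    # (B, V, 1, 1, 1)
        fused = (pooled * w).sum(dim=1)                           # weighted view aggregation
        return self.proj(fused)                                   # target-view latent condition

# Usage: fuse two conditioning views, then inject the result into the diffusion latent
# (a simple residual addition stands in for the paper's integration step).
lift, fuse = ViewAware3DLifting(), MultiViewFeatureFusion()
views = torch.randn(1, 2, 4, 32, 32)                              # (B, V, C, H, W) input-view latents
lifted = torch.stack([lift(views[:, v]) for v in range(2)], dim=1)
condition = fuse(lifted, view_weights=torch.tensor([[0.7, 0.3]]))
noisy_latent = torch.randn(1, 4, 32, 32)
conditioned = noisy_latent + condition                            # passed to the frozen diffusion UNet
print(conditioned.shape)                                          # torch.Size([1, 4, 32, 32])
```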
