CMD: Controllable Multiview Diffusion for 3D Editing and Progressive Generation

Abstract

Recently, 3D generation methods have shown their powerful ability to automate 3D model creation. However, most 3D generation methods rely only on an input image or a text prompt to generate a 3D model, offering no control over the individual components of the generated model: any modification of the input image requires regenerating the entire 3D model. In this paper, we introduce a new method called CMD that generates a 3D model from an input image while enabling flexible local editing of each component of the 3D model. In CMD, we formulate 3D generation as a conditional multiview diffusion model, which takes the existing or known parts as conditions and generates the edited or added components. This conditional multiview diffusion model not only allows the generation of 3D models part by part but also enables local editing of a 3D model according to local revisions of the input image without changing the other 3D parts. Extensive experiments demonstrate that CMD decomposes a complex 3D generation task into multiple components, improving the generation quality. Meanwhile, CMD enables efficient and flexible local editing of a 3D model by editing just one rendered image.

@article{li2025_2505.07003,
  title={CMD: Controllable Multiview Diffusion for 3D Editing and Progressive Generation},
  author={Peng Li and Suizhi Ma and Jialiang Chen and Yuan Liu and Chongyi Zhang and Wei Xue and Wenhan Luo and Alla Sheffer and Wenping Wang and Yike Guo},
  journal={arXiv preprint arXiv:2505.07003},
  year={2025}
}