Wonder3D++: Cross-domain Diffusion for High-fidelity 3D Generation from a Single Image

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2025
3 November 2025
Yuxiao Yang, Xiao-Xiao Long, Zhiyang Dou, Cheng Lin, Yuan Liu, Qingsong Yan, Y. Ma, Haoqian Wang, Zhiqiang Wu, Wei Yin
arXiv (abs) | PDF | HTML
Main: 11 pages, Bibliography: 3 pages, Appendix: 7 pages, 21 figures
Abstract

In this work, we introduce Wonder3D++, a novel method for efficiently generating high-fidelity textured meshes from single-view images. Recent methods based on Score Distillation Sampling (SDS) have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry. In contrast, certain works directly produce 3D information via fast network inference, but their results are often of low quality and lack geometric details. To holistically improve the quality, consistency, and efficiency of single-view reconstruction, we propose a cross-domain diffusion model that generates multi-view normal maps and the corresponding color images. To ensure consistent generation, we employ a multi-view cross-domain attention mechanism that facilitates information exchange across views and modalities. Lastly, we introduce a cascaded 3D mesh extraction algorithm that derives high-quality surfaces from the multi-view 2D representations in only about 3 minutes in a coarse-to-fine manner. Our extensive evaluations demonstrate that our method achieves high-quality reconstruction results, robust generalization, and good efficiency compared to prior works. Code is available at this https URL.
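
The abstract describes a multi-view cross-domain attention mechanism that lets tokens from all views and both modalities (normal maps and color images) exchange information. The following is a minimal sketch of that idea, not the authors' implementation: the module name, tensor shapes, and layer sizes are illustrative assumptions.

```python
# Hedged sketch of joint attention over all views and both domains.
# Shapes, dimensions, and the class name are assumptions for illustration,
# not the paper's actual architecture.
import torch
import torch.nn as nn


class MultiViewCrossDomainAttention(nn.Module):
    """Self-attention over a joint token sequence built from every view and
    every domain (normals + colors), so information flows across views and
    across modalities in a single attention pass."""

    def __init__(self, dim: int = 320, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n_views * n_domains * n_tokens, dim)
        h = self.norm(tokens)
        out, _ = self.attn(h, h, h, need_weights=False)
        return tokens + out  # residual connection


if __name__ == "__main__":
    batch, n_views, n_domains, n_tokens, dim = 1, 6, 2, 64, 320
    # Flatten per-view, per-domain feature tokens into one joint sequence.
    feats = torch.randn(batch, n_views * n_domains * n_tokens, dim)
    fused = MultiViewCrossDomainAttention(dim=dim)(feats)
    print(fused.shape)  # torch.Size([1, 768, 320])
```

In practice such a block would sit inside the diffusion UNet's attention layers; here the joint sequence is simply a concatenation of per-view, per-domain tokens so every token can attend to every other.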
