3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion

19 September 2024
Zhaoxi Chen, Jiaxiang Tang, Yuhao Dong, Ziang Cao, Fangzhou Hong, Yushi Lan, Tengfei Wang, Haozhe Xie, Tong Wu, Shunsuke Saito, Liang Pan, Dahua Lin, Ziwei Liu
Abstract

The increasing demand for high-quality 3D assets across various industries necessitates efficient and automated 3D content creation. Despite recent advancements in 3D generative models, existing methods still face challenges with optimization speed, geometric fidelity, and the lack of assets for physically based rendering (PBR). In this paper, we introduce 3DTopia-XL, a scalable native 3D generative model designed to overcome these limitations. 3DTopia-XL leverages a novel primitive-based 3D representation, PrimX, which encodes detailed shape, albedo, and material fields into a compact tensorial format, facilitating the modeling of high-resolution geometry with PBR assets. On top of this representation, we propose a generative framework based on the Diffusion Transformer (DiT), which comprises 1) Primitive Patch Compression and 2) Latent Primitive Diffusion. 3DTopia-XL learns to generate high-quality 3D assets from textual or visual inputs. We conduct extensive qualitative and quantitative experiments to demonstrate that 3DTopia-XL significantly outperforms existing methods in generating high-quality 3D assets with fine-grained textures and materials, efficiently bridging the quality gap between generative models and real-world applications.
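To make the "compact tensorial format" concrete, below is a minimal sketch of a PrimX-like layout, assuming each primitive carries a 3D position, a scalar scale, and a small voxel payload holding an SDF channel, RGB albedo channels, and PBR material channels. All names, shapes, and channel choices here are illustrative assumptions, not the authors' released code.

```python
import torch

# Hypothetical PrimX-like tensor layout (illustrative only):
# N primitives, each with a 3D center, a scalar scale, and an a^3 voxel
# payload of 1 SDF channel + 3 albedo channels + 2 material channels
# (e.g., metallic and roughness for PBR). Channel counts are assumptions.
N, a = 2048, 8
C = 1 + 3 + 2  # SDF + RGB albedo + (metallic, roughness)

positions = torch.randn(N, 3)          # primitive centers in object space
scales = torch.rand(N, 1)              # per-primitive global scale
payload = torch.randn(N, C, a, a, a)   # spatially varying fields per primitive

# Flatten each primitive into one vector, yielding the compact tensorial
# format the abstract describes: one token per primitive, a natural fit
# for a transformer-based (DiT-style) latent diffusion model.
primx = torch.cat([positions, scales, payload.flatten(1)], dim=-1)
print(primx.shape)  # (N, 3 + 1 + C * a**3)
```

Under this reading, Primitive Patch Compression would compress each primitive's voxel payload into a low-dimensional latent, and Latent Primitive Diffusion would denoise the resulting set of latent tokens; the sketch above only shows the representation, not those two stages.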

@article{chen2025_2409.12957,
  title={3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion},
  author={Zhaoxi Chen and Jiaxiang Tang and Yuhao Dong and Ziang Cao and Fangzhou Hong and Yushi Lan and Tengfei Wang and Haozhe Xie and Tong Wu and Shunsuke Saito and Liang Pan and Dahua Lin and Ziwei Liu},
  journal={arXiv preprint arXiv:2409.12957},
  year={2025}
}