Current methods for 3D generation still fall short in physically based rendering (PBR) texturing, primarily due to limited data and the challenges of modeling multi-channel materials. In this work, we propose MuMA, a method for 3D PBR texturing through Multi-channel Multi-view generation and Agentic post-processing. Our approach features two key innovations: 1) We opt to model shaded and albedo appearance channels, where the shaded channels enable the integration of intrinsic decomposition modules for material properties. 2) Leveraging multimodal large language models, we emulate artists' techniques for material assessment and selection. Experiments demonstrate that MuMA achieves superior results in visual quality and material fidelity compared to existing methods.
@article{zhu2025_2503.18461,
  title={MuMA: 3D PBR Texturing via Multi-Channel Multi-View Generation and Agentic Post-Processing},
  author={Lingting Zhu and Jingrui Ye and Runze Zhang and Zeyu Hu and Yingda Yin and Lanjiong Li and Jinnan Chen and Shengju Qian and Xin Wang and Qingmin Liao and Lequan Yu},
  journal={arXiv preprint arXiv:2503.18461},
  year={2025}
}