
SRM-Hair: Single Image Head Mesh Reconstruction via 3D Morphable Hair

Abstract

3D Morphable Models (3DMMs) have played a pivotal role as a fundamental representation or initialization for 3D avatar animation and reconstruction. However, extending 3DMMs to hair remains challenging due to the difficulty of enforcing vertex-level consistent semantic meaning across hair shapes. This paper introduces a novel method, Semantic-consistent Ray Modeling of Hair (SRM-Hair), for making 3D hair morphable and controlled by coefficients. The key contribution lies in semantic-consistent ray modeling, which extracts ordered hair surface vertices and exhibits notable properties such as additivity for hairstyle fusion, adaptability, flipping, and thickness modification. We collect a dataset of over 250 high-fidelity real hair scans paired with 3D face data to serve as a prior for the 3D morphable hair. Based on this, SRM-Hair can reconstruct a hair mesh combined with a 3D head from a single image. Note that SRM-Hair produces an independent hair mesh, facilitating applications in virtual avatar creation, realistic animation, and high-fidelity hair rendering. Both quantitative and qualitative experiments demonstrate that SRM-Hair achieves state-of-the-art performance in 3D mesh reconstruction. Our project is available at this https URL.
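To make "morphable and controlled by coefficients" concrete, the sketch below illustrates the generic linear 3DMM formulation that such representations typically build on; it is an assumption for illustration only and does not reproduce SRM-Hair's ray-based parameterization. The names `mean_hair`, `basis`, and `reconstruct_hair` are hypothetical.

```python
import numpy as np

# Minimal sketch of a generic morphable-model formulation (illustration only).
# Assumed shapes: `mean_hair` is an (N, 3) array of ordered hair surface
# vertices and `basis` is a (K, N, 3) array of learned shape components.

def reconstruct_hair(mean_hair: np.ndarray,
                     basis: np.ndarray,
                     coeffs: np.ndarray) -> np.ndarray:
    """Recover an (N, 3) hair vertex set from a length-K coefficient vector."""
    # Linear combination: mean shape plus coefficient-weighted components.
    return mean_hair + np.tensordot(coeffs, basis, axes=1)

# Because such a model is linear in its coefficients, blending two hairstyles
# reduces to interpolating their coefficient vectors (the kind of "additivity"
# the abstract describes), e.g.:
# fused = reconstruct_hair(mean_hair, basis, 0.5 * coeffs_a + 0.5 * coeffs_b)
```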

@article{wang2025_2503.06154,
  title={SRM-Hair: Single Image Head Mesh Reconstruction via 3D Morphable Hair},
  author={Zidu Wang and Jiankuo Zhao and Miao Xu and Xiangyu Zhu and Zhen Lei},
  journal={arXiv preprint arXiv:2503.06154},
  year={2025}
}