Garment3DGen: 3D Garment Stylization and Texture Generation

27 March 2024
Nikolaos Sarafianos
Tuur Stuyck
Xiaoyu Xiang
Yilei Li
Jovan Popovic
Rakesh Ranjan
Abstract

We introduce Garment3DGen, a new method to synthesize 3D garment assets from a base mesh given a single input image as guidance. Our proposed approach allows users to generate 3D textured clothes based on both real and synthetic images, such as those generated by text prompts. The generated assets can be directly draped and simulated on human bodies. We leverage the recent progress of image-to-3D diffusion methods to generate 3D garment geometries. However, since these geometries cannot be utilized directly for downstream tasks, we propose to use them as pseudo ground-truth and set up a mesh deformation optimization procedure that deforms a base template mesh to match the generated 3D target. Carefully designed losses allow the base mesh to freely deform towards the desired target while preserving mesh quality and topology, so that the result can be simulated. Finally, we generate high-fidelity texture maps that are globally and locally consistent and faithfully capture the input guidance, allowing us to render the generated 3D assets. With Garment3DGen, users can generate the simulation-ready 3D garment of their choice without the need for artist intervention. We present extensive quantitative and qualitative comparisons on various assets and demonstrate that Garment3DGen unlocks key applications ranging from sketch-to-simulated garments to interacting with garments in VR. Code is publicly available.
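The core step described above is a mesh deformation optimization: per-vertex offsets on the base template are optimized so the deformed surface matches the image-to-3D output used as pseudo ground-truth, while regularization preserves mesh quality and topology. Below is a minimal illustrative sketch of such a loop in PyTorch; it is not the authors' implementation. The fitting term (a Chamfer distance to points sampled from the generated target), the edge-length regularizer, and all names and weights (deform_template, w_fit, w_edge) are assumptions for illustration only.

# Minimal sketch (not the authors' code): optimize per-vertex offsets on a base
# template so its surface matches a pseudo ground-truth point cloud, while an
# edge-length regularizer keeps the fixed topology usable for simulation.
import torch

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a (N,3) and b (M,3).
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def deform_template(verts, edges, target_pts, steps=500, lr=1e-3,
                    w_fit=1.0, w_edge=0.1):
    # verts: (V,3) template vertices; edges: (E,2) vertex indices; topology is fixed.
    offsets = torch.zeros_like(verts, requires_grad=True)
    rest_len = (verts[edges[:, 0]] - verts[edges[:, 1]]).norm(dim=1)
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(steps):
        v = verts + offsets
        fit = chamfer(v, target_pts)                       # pull surface toward target
        cur_len = (v[edges[:, 0]] - v[edges[:, 1]]).norm(dim=1)
        edge_reg = ((cur_len - rest_len) ** 2).mean()      # preserve mesh quality
        loss = w_fit * fit + w_edge * edge_reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (verts + offsets).detach()  # deformed vertices, same faces as the template

In the paper the target comes from an image-to-3D diffusion model; here target_pts stands in for points sampled from that generated geometry, and the simple edge-length term stands in for the paper's carefully designed losses.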

@article{sarafianos2025_2403.18816,
  title={Garment3DGen: 3D Garment Stylization and Texture Generation},
  author={Nikolaos Sarafianos and Tuur Stuyck and Xiaoyu Xiang and Yilei Li and Jovan Popovic and Rakesh Ranjan},
  journal={arXiv preprint arXiv:2403.18816},
  year={2025}
}