
DiffusedWrinkles: A Diffusion-Based Model for Data-Driven Garment Animation

Abstract

We present a data-driven method for learning to generate animations of 3D garments using a 2D image diffusion model. In contrast to existing methods, typically based on fully connected networks, graph neural networks, or generative adversarial networks, which have difficulty coping with parametric garments exhibiting fine wrinkle detail, our approach synthesizes high-quality 3D animations for a wide variety of garments and body shapes while being agnostic to the garment mesh topology. Our key idea is to represent 3D garment deformations as a 2D layout-consistent texture that encodes 3D offsets with respect to a parametric garment template. Using this representation, we encode a large dataset of garments simulated in various motions and shapes and train a novel conditional diffusion model that synthesizes high-quality pose-, shape-, and design-dependent 3D garment deformations. Since our model is generative, we can synthesize various plausible deformations for a given target pose, shape, and design. Additionally, we show that we can further condition our model on an existing garment state, which enables the generation of temporally coherent sequences.
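The layout-consistent representation can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, not code from the paper: it shows one way to encode per-vertex 3D offsets, computed with respect to a parametric garment template, into a UV-aligned texture and to read them back. The function names, the nearest-pixel splatting, and the 256x256 resolution are all assumptions made for the example.

    # Minimal sketch (illustrative, not the authors' implementation): per-vertex
    # 3D offsets between a simulated garment and its parametric template are
    # splatted into a 2D image using the garment's UV parameterization.
    import numpy as np

    def offsets_to_texture(deformed_verts, template_verts, uv_coords, resolution=256):
        """Encode per-vertex 3D offsets as an HxWx3 texture via UV coordinates.

        deformed_verts, template_verts: (V, 3) vertex positions.
        uv_coords: (V, 2) per-vertex UV coordinates in [0, 1].
        """
        offsets = deformed_verts - template_verts              # (V, 3) displacements
        texture = np.zeros((resolution, resolution, 3), dtype=np.float32)
        mask = np.zeros((resolution, resolution), dtype=bool)

        # Nearest-pixel splat; a full pipeline would rasterize UV triangles instead.
        pix = np.clip((uv_coords * (resolution - 1)).round().astype(int), 0, resolution - 1)
        texture[pix[:, 1], pix[:, 0]] = offsets
        mask[pix[:, 1], pix[:, 0]] = True
        return texture, mask

    def texture_to_offsets(texture, uv_coords):
        """Recover per-vertex offsets by sampling the texture at each vertex's UV."""
        resolution = texture.shape[0]
        pix = np.clip((uv_coords * (resolution - 1)).round().astype(int), 0, resolution - 1)
        return texture[pix[:, 1], pix[:, 0]]

In such a setup, the conditional diffusion model would be trained to generate these offset textures given pose, shape, and design (and optionally a previous garment state), and the decoded offsets would then be added back onto the template mesh to obtain the deformed garment.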

@article{vidaurre2025_2503.18370,
  title={DiffusedWrinkles: A Diffusion-Based Model for Data-Driven Garment Animation},
  author={Raquel Vidaurre and Elena Garces and Dan Casas},
  journal={arXiv preprint arXiv:2503.18370},
  year={2025}
}