
TransVDM: Motion-Constrained Video Diffusion Model for Transparent Video Synthesis

Abstract

Recent developments in Video Diffusion Models (VDMs) have demonstrated a remarkable capability to generate high-quality video content. Nonetheless, the potential of VDMs for creating transparent videos remains largely uncharted. In this paper, we introduce TransVDM, the first diffusion-based model designed specifically for transparent video generation. TransVDM integrates a Transparent Variational Autoencoder (TVAE) and a pretrained UNet-based VDM, along with a novel Alpha Motion Constraint Module (AMCM). The TVAE captures the alpha-channel transparency of video frames and encodes it into the latent space of the VDM, enabling a seamless transition to transparent video diffusion. To improve the detection of transparent areas, the AMCM imposes motion constraints derived from the foreground within the VDM, helping to reduce undesirable artifacts. Moreover, we curate a dataset of 250K transparent frames for training. Experimental results demonstrate the effectiveness of our approach across various benchmarks.
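The paper's implementation is not public, so the following is a minimal, hypothetical sketch of the TVAE idea described above: a small encoder maps the alpha matte into the same latent shape a pretrained RGB video VAE produces, so the UNet-based VDM can denoise transparency jointly with appearance. All names (TransparentVAE, encode_alpha), layer sizes, and the fusion-by-addition step are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class TransparentVAE(nn.Module):
    # Hypothetical TVAE-style sketch: encode a 1-channel alpha matte to the
    # 4-channel, 8x-downsampled latent shape common to image/video VAEs.
    def __init__(self, latent_channels: int = 4):
        super().__init__()
        self.alpha_encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(128, latent_channels, 3, stride=2, padding=1),
        )
        self.alpha_decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 128, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def encode_alpha(self, alpha: torch.Tensor) -> torch.Tensor:
        # alpha: (B, 1, H, W) matte in [0, 1] -> (B, C, H/8, W/8) latent
        return self.alpha_encoder(alpha)

    def decode_alpha(self, z: torch.Tensor) -> torch.Tensor:
        # latent -> reconstructed alpha matte in [0, 1]
        return self.alpha_decoder(z)

# Usage: combine alpha latents with RGB latents from a frozen pretrained VAE.
# Additive fusion is an assumption; the paper does not state how the two
# latents are combined.
tvae = TransparentVAE()
alpha = torch.rand(2, 1, 256, 256)       # example alpha mattes
rgb_latent = torch.randn(2, 4, 32, 32)   # from a pretrained RGB VAE encoder
fused = rgb_latent + tvae.encode_alpha(alpha)
print(fused.shape)                       # torch.Size([2, 4, 32, 32])

This only illustrates where the alpha information enters the latent space; the AMCM's motion constraint would act on the foreground region of these latents during diffusion.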

@article{li2025_2502.19454,
  title={TransVDM: Motion-Constrained Video Diffusion Model for Transparent Video Synthesis},
  author={Menghao Li and Zhenghao Zhang and Junchao Liao and Long Qin and Weizhi Wang},
  journal={arXiv preprint arXiv:2502.19454},
  year={2025}
}