
VidTwin: Video VAE with Decoupled Structure and Dynamics

Abstract

Recent advancements in video autoencoders (Video AEs) have significantly improved the quality and efficiency of video generation. In this paper, we propose a novel and compact video autoencoder, VidTwin, that decouples video into two distinct latent spaces: Structure latent vectors, which capture overall content and global movement, and Dynamics latent vectors, which represent fine-grained details and rapid movements. Specifically, our approach leverages an Encoder-Decoder backbone, augmented with two submodules for extracting these latent spaces. The first submodule employs a Q-Former to extract low-frequency motion trends, followed by downsampling blocks to remove redundant content details. The second averages the latent vectors along the spatial dimension to capture rapid motion. Extensive experiments show that VidTwin achieves a high compression rate of 0.20% with high reconstruction quality (PSNR of 28.14 on the MCL-JCV dataset), and performs efficiently and effectively in downstream generative tasks. Moreover, our model demonstrates explainability and scalability, paving the way for future research in video latent representation and generation. Check our project page for more details: this https URL.
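The decoupling described above can be sketched in miniature: a Structure branch that spatially downsamples the encoder's latent grid to keep coarse content, and a Dynamics branch that averages over the spatial dimensions to leave one compact per-frame vector. This is a minimal NumPy sketch under illustrative assumptions; the shapes, the average-pooling stand-in for the paper's Q-Former and downsampling blocks, and all function names are hypothetical, not the paper's actual modules.

```python
import numpy as np

# Toy latent grid from a shared encoder: (T frames, H, W, C channels).
# All shapes here are illustrative assumptions, not the paper's configuration.
T, H, W, C = 8, 16, 16, 4
rng = np.random.default_rng(0)
latents = rng.random((T, H, W, C))

def structure_latent(z, factor=4):
    """Spatially downsample to retain coarse content and global movement.
    (Average pooling is a stand-in for the paper's Q-Former + downsampling.)"""
    t, h, w, c = z.shape
    z = z.reshape(t, h // factor, factor, w // factor, factor, c)
    return z.mean(axis=(2, 4))  # -> (T, H/factor, W/factor, C)

def dynamics_latent(z):
    """Average over the spatial dimensions so each frame keeps only a
    compact vector, intended to capture rapid frame-to-frame motion."""
    return z.mean(axis=(1, 2))  # -> (T, C)

s = structure_latent(latents)   # (8, 4, 4, 4)
d = dynamics_latent(latents)    # (8, 4)
```

The point of the sketch is only the asymmetry: the Structure latent keeps a (reduced) spatial grid per frame, while the Dynamics latent collapses space entirely, so fine spatial detail and fast temporal variation live in separate, much smaller tensors.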

@article{wang2025_2412.17726,
  title={VidTwin: Video VAE with Decoupled Structure and Dynamics},
  author={Yuchi Wang and Junliang Guo and Xinyi Xie and Tianyu He and Xu Sun and Jiang Bian},
  journal={arXiv preprint arXiv:2412.17726},
  year={2025}
}