DreaMoving: A Human Video Generation Framework based on Diffusion Models
Mengyang Feng, Jinlin Liu, Kai Yu, Yuan Yao, Zheng Hui, Xiefan Guo, Xianhui Lin, Haolan Xue, Chen Shi, Xiaowen Li, Aojie Li, Xiaoyang Kang, Biwen Lei, Miaomiao Cui, Peiran Ren, Xuansong Xie

Abstract
In this paper, we present DreaMoving, a diffusion-based controllable video generation framework for producing high-quality customized human videos. Specifically, given a target identity and posture sequences, DreaMoving can generate a video of the target identity moving or dancing anywhere, driven by the posture sequences. To this end, we propose a Video ControlNet for motion control and a Content Guider for identity preservation. The proposed model is easy to use and can be adapted to most stylized diffusion models to generate diverse results. The project page is available at https://dreamoving.github.io/dreamoving
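The abstract names two conditioning components: a ControlNet-style branch that injects pose information for motion control, and a Content Guider that injects identity features for identity preservation. Below is a minimal conceptual sketch of how such a pose-conditioned, identity-guided denoiser could be wired together; all class names, shapes, and interfaces here are hypothetical illustrations for a single latent frame, not the authors' released implementation.

```python
# Hypothetical sketch: pose residuals (ControlNet-style) plus identity cross-attention
# (Content-Guider-style) feeding a toy per-frame noise predictor. Not the DreaMoving code.
import torch
import torch.nn as nn


class PoseControlBranch(nn.Module):
    """Encodes a rendered pose frame into residual features added to the denoiser."""
    def __init__(self, channels=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, pose_frame):
        return self.encode(pose_frame)


class ContentGuider(nn.Module):
    """Injects identity tokens into spatial features via cross-attention."""
    def __init__(self, channels=64, id_dim=128):
        super().__init__()
        self.to_kv = nn.Linear(id_dim, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)

    def forward(self, video_feat, id_tokens):
        b, c, h, w = video_feat.shape
        q = video_feat.flatten(2).transpose(1, 2)   # (B, HW, C) queries from video features
        kv = self.to_kv(id_tokens)                  # (B, T, C) keys/values from identity tokens
        out, _ = self.attn(q, kv, kv)
        return video_feat + out.transpose(1, 2).reshape(b, c, h, w)


class ToyDenoiser(nn.Module):
    """Toy per-frame noise predictor combining pose residuals and identity guidance."""
    def __init__(self, channels=64):
        super().__init__()
        self.in_conv = nn.Conv2d(4, channels, 3, padding=1)
        self.pose_branch = PoseControlBranch(channels)
        self.guider = ContentGuider(channels)
        self.out_conv = nn.Conv2d(channels, 4, 3, padding=1)

    def forward(self, noisy_latent, pose_frame, id_tokens):
        h = self.in_conv(noisy_latent) + self.pose_branch(pose_frame)
        h = self.guider(h, id_tokens)
        return self.out_conv(h)


# Usage: predict noise for one latent frame given a pose frame and identity tokens.
model = ToyDenoiser()
noisy_latent = torch.randn(1, 4, 32, 32)   # latent video frame
pose_frame = torch.randn(1, 3, 32, 32)     # rendered pose/skeleton image
id_tokens = torch.randn(1, 8, 128)         # identity embedding tokens
eps_pred = model(noisy_latent, pose_frame, id_tokens)
```

In this sketch the pose branch adds spatial residuals (motion control) while the guider attends over identity tokens (identity preservation); a full video model would additionally share temporal layers across frames.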