
DynamiCtrl: Rethinking the Basic Structure and the Role of Text for High-quality Human Image Animation

Abstract

Human image animation has recently gained significant attention due to advancements in generative models. However, existing methods still face two major challenges: (1) architectural limitations, as most models rely on a U-Net backbone, which underperforms MM-DiT; and (2) the neglect of textual information, which can enhance controllability. In this work, we introduce DynamiCtrl, a novel framework that not only explores different pose-guided control structures in MM-DiT, but also reemphasizes the crucial role of text in this task. Specifically, we employ a shared VAE encoder for both reference images and driving pose videos, eliminating the need for an additional pose encoder and simplifying the overall framework. To incorporate pose features into the full attention blocks, we propose Pose-adaptive Layer Norm (PadaLN), which utilizes adaptive layer normalization to encode sparse pose features. The encoded features are directly added to the visual input, preserving the spatiotemporal consistency of the backbone while effectively introducing pose control into MM-DiT. Furthermore, within the full attention mechanism, we align textual and visual features to enhance controllability. By leveraging text, we not only enable fine-grained control over the generated content, but also, for the first time, achieve simultaneous control over both background and motion. Experimental results verify the superiority of DynamiCtrl on benchmark datasets, demonstrating its strong identity preservation, heterogeneous character driving, background controllability, and high-quality synthesis. The project page is available at this https URL.
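To make the PadaLN idea concrete, the following is a minimal PyTorch sketch of how adaptive layer normalization could encode pose tokens and inject them into the visual stream by direct addition, as the abstract describes. The module name PadaLN comes from the paper, but the tensor shapes, the conditioning signal (a timestep-style embedding), and the scale/shift MLP are illustrative assumptions, not the authors' implementation.

# Minimal sketch of Pose-adaptive Layer Norm (PadaLN), assuming an
# adaLN-style modulation conditioned on a timestep embedding. Shapes and
# the conditioning pathway are assumptions for illustration only.
import torch
import torch.nn as nn

class PadaLN(nn.Module):
    """Encodes VAE pose tokens with adaptive layer norm and adds them to
    the visual tokens, leaving the MM-DiT backbone itself unchanged."""
    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        # elementwise_affine=False: scale/shift come from the condition
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # Hypothetical conditioning MLP producing per-channel scale and shift
        self.to_scale_shift = nn.Sequential(
            nn.SiLU(),
            nn.Linear(cond_dim, 2 * dim),
        )

    def forward(
        self,
        pose_tokens: torch.Tensor,    # (B, N, dim): pose video after the shared VAE encoder
        visual_tokens: torch.Tensor,  # (B, N, dim): visual input to a full attention block
        cond: torch.Tensor,           # (B, cond_dim): e.g. a timestep embedding
    ) -> torch.Tensor:
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        modulated = self.norm(pose_tokens) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        # Pose control enters via direct addition to the visual stream,
        # preserving the backbone's spatiotemporal structure
        return visual_tokens + modulated

Because the modulated pose features are merged by addition rather than by extra cross-attention layers, the full attention blocks and their spatiotemporal token layout stay intact; this matches the abstract's claim that pose control is introduced without altering the MM-DiT backbone.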

@article{zhao2025_2503.21246,
  title={DynamiCtrl: Rethinking the Basic Structure and the Role of Text for High-quality Human Image Animation},
  author={Haoyu Zhao and Zhongang Qi and Cong Wang and Qingping Zheng and Guansong Lu and Fei Chen and Hang Xu and Zuxuan Wu},
  journal={arXiv preprint arXiv:2503.21246},
  year={2025}
}