
InterAnimate: Taming Region-aware Diffusion Model for Realistic Human Interaction Animation

Abstract

Recent video generation research has focused heavily on isolated actions, leaving interactive motions, such as hand-face interactions, largely unexamined. These interactions are essential for emerging biometric authentication systems, which rely on interactive motion-based anti-spoofing approaches. From a security perspective, there is a growing need for large-scale, high-quality interactive videos to train and strengthen authentication models. In this work, we introduce a novel paradigm for animating realistic hand-face interactions. Our approach simultaneously learns spatio-temporal contact dynamics and biomechanically plausible deformation effects, enabling natural interactions in which hand movements induce anatomically accurate facial deformations while maintaining collision-free contact. To facilitate this research, we present InterHF, a large-scale hand-face interaction dataset featuring 18 interaction patterns and 90,000 annotated videos. Additionally, we propose InterAnimate, a region-aware diffusion model designed specifically for interaction animation. InterAnimate leverages learnable spatial and temporal latents to effectively capture dynamic interaction priors and integrates a region-aware interaction mechanism that injects these priors into the denoising process. To the best of our knowledge, this work represents the first large-scale effort to systematically study human hand-face interactions. Qualitative and quantitative results show that InterAnimate produces highly realistic animations, setting a new benchmark. Code and data will be made public to advance research.
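To make the "region-aware interaction mechanism" concrete, below is a minimal, hypothetical sketch of the idea as stated in the abstract: learnable spatial and temporal latents serve as interaction priors and are injected into denoising features via cross-attention, gated by a region (e.g., hand-face contact) mask. All module names, shapes, and the residual-injection layout are assumptions for illustration only, not the paper's actual implementation.

```python
# Hypothetical sketch of region-aware prior injection (not the authors' code).
import torch
import torch.nn as nn


class RegionAwareInteraction(nn.Module):
    def __init__(self, dim=320, num_spatial=16, num_temporal=8, heads=8):
        super().__init__()
        # Learnable latents meant to capture dynamic interaction priors.
        self.spatial_latents = nn.Parameter(torch.randn(num_spatial, dim) * 0.02)
        self.temporal_latents = nn.Parameter(torch.randn(num_temporal, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats, region_mask):
        # feats: (B, T, N, C) denoising features for T frames with N spatial tokens.
        # region_mask: (B, T, N) soft mask highlighting the interaction region.
        b, t, n, c = feats.shape
        x = feats.reshape(b * t, n, c)
        # Broadcast the shared spatial + temporal priors to every frame.
        priors = torch.cat(
            [
                self.spatial_latents.unsqueeze(0).expand(b * t, -1, -1),
                self.temporal_latents.unsqueeze(0).expand(b * t, -1, -1),
            ],
            dim=1,
        )
        # Cross-attention: denoising features query the interaction priors.
        out, _ = self.attn(self.norm(x), priors, priors)
        # Inject priors only where the region mask is active, as a residual.
        out = out.reshape(b, t, n, c) * region_mask.unsqueeze(-1)
        return feats + out


if __name__ == "__main__":
    block = RegionAwareInteraction()
    feats = torch.randn(2, 4, 64, 320)   # 2 clips, 4 frames, 8x8 feature tokens
    mask = torch.rand(2, 4, 64)          # soft hand-face contact region per frame
    print(block(feats, mask).shape)      # torch.Size([2, 4, 64, 320])
```

The masked residual addition is one plausible way to confine the priors' influence to the interaction region while leaving the rest of the denoising features untouched; the paper may realize this differently.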

@article{lin2025_2504.10905,
  title={InterAnimate: Taming Region-aware Diffusion Model for Realistic Human Interaction Animation},
  author={Yukang Lin and Yan Hong and Zunnan Xu and Xindi Li and Chao Xu and Chuanbiao Song and Ronghui Li and Haoxing Chen and Jun Lan and Huijia Zhu and Weiqiang Wang and Jianfu Zhang and Xiu Li},
  journal={arXiv preprint arXiv:2504.10905},
  year={2025}
}