Re-HOLD: Video Hand Object Interaction Reenactment via adaptive Layout-instructed Diffusion Model

Abstract

Current digital human studies focusing on lip-syncing and body movement are no longer sufficient to meet growing industrial demand, while human video generation techniques that support interaction with real-world environments (e.g., objects) have not been well investigated. While human hand synthesis is already an intricate problem, generating objects in contact with hands and modeling their interactions is even more challenging, especially when the objects exhibit obvious variations in size and shape. To tackle these issues, we present a novel video Reenactment framework focusing on Human-Object Interaction (HOI) via an adaptive Layout-instructed Diffusion model (Re-HOLD). Our key insight is to employ specialized layout representations for hands and objects, respectively. Such representations enable effective disentanglement of hand modeling and object adaptation to diverse motion sequences. To further improve the generation quality of HOI, we design an interactive textural enhancement module for both hands and objects by introducing two independent memory banks. We also propose a layout adjustment strategy for the cross-object reenactment scenario to adaptively adjust unreasonable layouts caused by diverse object sizes during inference. Comprehensive qualitative and quantitative evaluations demonstrate that our proposed framework significantly outperforms existing methods. Project page: this https URL.

@article{fan2025_2503.16942,
  title={Re-HOLD: Video Hand Object Interaction Reenactment via adaptive Layout-instructed Diffusion Model},
  author={Yingying Fan and Quanwei Yang and Kaisiyuan Wang and Hang Zhou and Yingying Li and Haocheng Feng and Errui Ding and Yu Wu and Jingdong Wang},
  journal={arXiv preprint arXiv:2503.16942},
  year={2025}
}