CanonSwap: High-Fidelity and Consistent Video Face Swapping via Canonical Space Modulation

Xiangyang Luo
Ye Zhu
Yunfei Liu
Lijian Lin
Cong Wan
Zijian Cai
Shao-Lun Huang
Yu Li
Main: 8 pages · Bibliography: 3 pages · Appendix: 3 pages · 15 figures · 3 tables
Abstract

Video face swapping aims to address two primary challenges: effectively transferring the source identity to the target video and accurately preserving the dynamic attributes of the target face, such as head pose, facial expressions, lip-sync, etc. Existing methods mainly focus on achieving high-quality identity transfer but often fall short in maintaining the dynamic attributes of the target face, leading to inconsistent results. We attribute this issue to the inherent coupling of facial appearance and motion in videos. To address this, we propose CanonSwap, a novel video face-swapping framework that decouples motion information from appearance information. Specifically, CanonSwap first eliminates motion-related information, enabling identity modification within a unified canonical space. Subsequently, the swapped feature is reintegrated into the original video space, ensuring the preservation of the target face's dynamic attributes. To further achieve precise identity transfer with minimal artifacts and enhanced realism, we design a Partial Identity Modulation module that adaptively integrates source identity features using a spatial mask to restrict modifications to facial regions. Additionally, we introduce several fine-grained synchronization metrics to comprehensively evaluate the performance of video face swapping methods. Extensive experiments demonstrate that our method significantly outperforms existing approaches in terms of visual quality, temporal consistency, and identity preservation. Our project page is publicly available at this https URL.
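
To make the masked-modulation idea concrete, the sketch below shows one plausible PyTorch realization of a Partial Identity Modulation block: an identity embedding is mapped to per-channel scale and shift parameters, and a predicted soft spatial mask restricts the modulation to facial regions of the canonical features. All names, dimensions, and architectural choices here are assumptions for illustration; the paper's actual module may differ.

# A minimal sketch of spatially masked identity modulation, assuming an
# AdaIN-style parameterization. PartialIdentityModulation, feat_ch, and
# id_dim are hypothetical names, not the paper's actual interface.
import torch
import torch.nn as nn

class PartialIdentityModulation(nn.Module):
    """Inject a source identity embedding into canonical face features,
    restricted to facial regions by a predicted soft spatial mask."""

    def __init__(self, feat_ch: int = 256, id_dim: int = 512):
        super().__init__()
        # Map the identity embedding to per-channel scale and shift.
        self.to_scale = nn.Linear(id_dim, feat_ch)
        self.to_shift = nn.Linear(id_dim, feat_ch)
        # Predict a soft mask from the canonical features so modulation
        # only touches face pixels, not hair or background.
        self.to_mask = nn.Sequential(
            nn.Conv2d(feat_ch, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, canon_feat: torch.Tensor, id_embed: torch.Tensor) -> torch.Tensor:
        # canon_feat: (B, C, H, W) motion-free canonical features
        # id_embed:   (B, id_dim) source identity embedding
        b, c, _, _ = canon_feat.shape
        scale = self.to_scale(id_embed).view(b, c, 1, 1)
        shift = self.to_shift(id_embed).view(b, c, 1, 1)
        mask = self.to_mask(canon_feat)               # (B, 1, H, W) in [0, 1]
        modulated = canon_feat * (1 + scale) + shift  # identity-conditioned features
        # Blend: modified features inside the mask, originals outside.
        return mask * modulated + (1 - mask) * canon_feat

Under this reading, the soft mask is what limits identity edits to the face proper, leaving hair and background features untouched before the swapped features are warped back into the original video space.
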

@article{luo2025_2507.02691,
  title={CanonSwap: High-Fidelity and Consistent Video Face Swapping via Canonical Space Modulation},
  author={Xiangyang Luo and Ye Zhu and Yunfei Liu and Lijian Lin and Cong Wan and Zijian Cai and Shao-Lun Huang and Yu Li},
  journal={arXiv preprint arXiv:2507.02691},
  year={2025}
}