Towards More General Video-based Deepfake Detection through Facial Component Guided Adaptation for Foundation Model

Generative models have enabled the creation of highly realistic synthetic facial images, raising significant concerns due to their potential for misuse. Despite rapid advances in deepfake detection, efficiently leveraging foundation models to improve generalization to unseen forgery samples remains challenging. To address this challenge, we propose a novel side-network-based decoder that extracts spatial and temporal cues from the CLIP image encoder for generalized video-based deepfake detection. Additionally, we introduce Facial Component Guidance (FCG), which improves the generalizability of spatial learning by encouraging the model to focus on key facial regions. By leveraging the generic features of a vision-language foundation model, our approach demonstrates promising generalizability on challenging deepfake datasets while also exhibiting advantages in training-data efficiency, parameter efficiency, and model robustness.
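The sketch below illustrates the general side-network idea described above; it is not the authors' implementation. A lightweight, trainable decoder taps per-block token features from a frozen CLIP-style image encoder, fuses them per frame (spatial cues), and aggregates across frames (temporal cues). The encoder here is a stand-in stub; in practice one would plug in the actual CLIP ViT image encoder, and all layer sizes and module choices are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FrozenEncoderStub(nn.Module):
    """Stand-in for a frozen CLIP ViT image encoder that exposes per-block tokens."""

    def __init__(self, dim=768, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_blocks)])
        for p in self.parameters():
            p.requires_grad = False  # the foundation-model encoder stays frozen

    def forward(self, x):  # x: (B*T, num_tokens, dim)
        feats = []
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)  # tap intermediate features after every block
        return feats


class SideNetworkDecoder(nn.Module):
    """Trainable side network: per-frame spatial fusion + temporal aggregation."""

    def __init__(self, dim=768, num_blocks=4, num_classes=2):
        super().__init__()
        self.adapters = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_blocks)])
        self.temporal = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, block_feats, batch, frames):
        # Spatial: adapt each tapped feature map and average its tokens per frame.
        fused = sum(a(f).mean(dim=1) for a, f in zip(self.adapters, block_feats))
        fused = fused.view(batch, frames, -1)  # (B, T, dim)
        # Temporal: aggregate per-frame embeddings across the clip.
        _, h = self.temporal(fused)
        return self.head(h[-1])  # (B, num_classes): real vs. fake logits


if __name__ == "__main__":
    B, T, N, D = 2, 8, 50, 768               # clips, frames, tokens, feature dim
    frames = torch.randn(B * T, N, D)         # placeholder pre-tokenized frame features
    encoder, decoder = FrozenEncoderStub(D), SideNetworkDecoder(D)
    with torch.no_grad():
        feats = encoder(frames)               # frozen forward pass
    logits = decoder(feats, B, T)             # only the side network receives gradients
    print(logits.shape)                       # torch.Size([2, 2])
```

Because gradients flow only through the side network, the adapted model keeps the generic CLIP representation intact, which is one plausible reason for the parameter and training-data efficiency claimed in the abstract.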
@article{han2025_2404.05583,
  title   = {Towards More General Video-based Deepfake Detection through Facial Component Guided Adaptation for Foundation Model},
  author  = {Yue-Hua Han and Tai-Ming Huang and Kai-Lung Hua and Jun-Cheng Chen},
  journal = {arXiv preprint arXiv:2404.05583},
  year    = {2025}
}