
Towards General Visual-Linguistic Face Forgery Detection (V2)

Abstract

Face manipulation techniques have achieved significant advances, presenting serious challenges to security and social trust. Recent works demonstrate that leveraging multimodal models can enhance the generalization and interpretability of face forgery detection. However, existing annotation approaches, whether through human labeling or direct Multimodal Large Language Model (MLLM) generation, often suffer from hallucination issues, leading to inaccurate text descriptions, especially for high-quality forgeries. To address this, we propose Face Forgery Text Generator (FFTG), a novel annotation pipeline that generates accurate text descriptions by leveraging forgery masks for initial region and type identification, followed by a comprehensive prompting strategy to guide MLLMs in reducing hallucination. We validate our approach by fine-tuning both CLIP, with a three-branch training framework combining unimodal and multimodal objectives, and MLLMs, with our structured annotations. Experimental results demonstrate that our method not only achieves more accurate annotations with higher region identification accuracy, but also leads to improvements in model performance across various forgery detection benchmarks. Our code is available at this https URL.
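To make the mask-based first stage of FFTG concrete, below is a minimal sketch of how a forgery mask could be mapped to manipulated facial regions before prompting an MLLM. The region boxes, threshold, and function name are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical facial-region boxes (x1, y1, x2, y2) on an aligned 256x256 face.
# The actual region definitions used by FFTG are not specified in the abstract.
FACE_REGIONS = {
    "eyes":  (60, 70, 196, 120),
    "nose":  (100, 110, 156, 170),
    "mouth": (85, 175, 170, 220),
}

def identify_forged_regions(mask: np.ndarray, threshold: float = 0.15):
    """Return facial regions whose overlap with the forgery mask exceeds a threshold.

    `mask` is assumed to be an (H, W) array in [0, 1], where higher values mark
    manipulated pixels (e.g., a ground-truth mask or a |real - fake| difference map).
    """
    binary = (mask > 0.5).astype(np.float32)
    forged = []
    for name, (x1, y1, x2, y2) in FACE_REGIONS.items():
        patch = binary[y1:y2, x1:x2]
        ratio = float(patch.mean()) if patch.size else 0.0
        if ratio > threshold:
            forged.append((name, ratio))
    # Most strongly manipulated regions first; these labels would seed the MLLM prompt.
    return sorted(forged, key=lambda item: item[1], reverse=True)
```

The returned region names (and their overlap ratios) could then be inserted into the prompt so the MLLM describes only regions the mask actually supports, which is the hallucination-reduction idea the abstract describes.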

@article{sun2025_2502.20698,
  title={Towards General Visual-Linguistic Face Forgery Detection (V2)},
  author={Ke Sun and Shen Chen and Taiping Yao and Ziyin Zhou and Jiayi Ji and Xiaoshuai Sun and Chia-Wen Lin and Rongrong Ji},
  journal={arXiv preprint arXiv:2502.20698},
  year={2025}
}