ResearchTrend.AI
Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector

26 March 2025
Xiao Guo
Xiufeng Song
Yue Zhang
Xiaohong Liu
Xiaoming Liu
Abstract

Deepfake detection is a long-established research topic vital for mitigating the spread of malicious misinformation. Unlike prior methods, which provide either a binary classification result or a textual explanation but not both, we introduce a novel method capable of generating both simultaneously. Our method harnesses the multi-modal learning capability of the pre-trained CLIP and the interpretability of large language models (LLMs) to enhance both the generalization and the explainability of deepfake detection. Specifically, we introduce a multi-modal face forgery detector (M2F2-Det) that employs tailored face forgery prompt learning, incorporating the pre-trained CLIP to improve generalization to unseen forgeries. M2F2-Det also incorporates an LLM to provide detailed textual explanations of its detection decisions, enhancing interpretability by bridging the gap between natural language and the subtle cues of facial forgeries. Empirically, we evaluate M2F2-Det on both detection and explanation-generation tasks, where it achieves state-of-the-art performance, demonstrating its effectiveness in identifying and explaining diverse forgeries.
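To make the prompt-learning idea concrete, here is a minimal sketch of how learnable class prompts can be scored against frozen image features in CLIP's contrastive style. This is an illustrative assumption, not the authors' implementation: the module name, prompt design, and the LLM explanation branch of M2F2-Det are not specified here, and real image features would come from a pre-trained CLIP encoder rather than random tensors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptForgeryHead(nn.Module):
    """Hypothetical prompt-learning head for real-vs-fake scoring.

    Stands in for M2F2-Det's tailored face forgery prompt learning:
    two learnable embeddings (one per class) replace hand-written
    text prompts, while the CLIP image encoder stays frozen.
    """

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # Learnable "prompt" embeddings for {real, fake}.
        self.class_prompts = nn.Parameter(torch.randn(2, embed_dim) * 0.02)
        # Temperature initialized to ln(1/0.07), as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.6593))

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between image features and class prompts,
        # scaled and softmaxed over the two classes.
        img = F.normalize(image_features, dim=-1)
        txt = F.normalize(self.class_prompts, dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()
        return logits.softmax(dim=-1)  # column 1 ~ P(fake)

# Stand-in features; a real pipeline would use frozen CLIP embeddings.
head = PromptForgeryHead()
feats = torch.randn(4, 512)  # batch of 4 image embeddings
probs = head(feats)
print(probs.shape)  # torch.Size([4, 2])
```

Because only `class_prompts` (and the temperature) receive gradients, the frozen CLIP backbone keeps its general visual knowledge, which is the usual motivation for prompt tuning over full fine-tuning.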

@article{guo2025_2503.20188,
  title={Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector},
  author={Xiao Guo and Xiufeng Song and Yue Zhang and Xiaohong Liu and Xiaoming Liu},
  journal={arXiv preprint arXiv:2503.20188},
  year={2025}
}