FaceShield: Explainable Face Anti-Spoofing with Multimodal Large Language Models

Abstract

Face anti-spoofing (FAS) is crucial for protecting facial recognition systems from presentation attacks. Previous methods have approached this task as a classification problem, offering little interpretability or reasoning behind their predictions. Recently, multimodal large language models (MLLMs) have shown strong capabilities in perception, reasoning, and decision-making in visual tasks. However, there is currently no universal and comprehensive MLLM and dataset specifically designed for the FAS task. To address this gap, we propose FaceShield, an MLLM for FAS, along with the corresponding pre-training and supervised fine-tuning (SFT) datasets, FaceShield-pre10K and FaceShield-sft45K. FaceShield is capable of determining the authenticity of faces, identifying types of spoofing attacks, providing reasoning for its judgments, and detecting attack areas. Specifically, we employ spoof-aware vision perception (SAVP), which incorporates both the original image and auxiliary information derived from prior knowledge. We then use a prompt-guided vision token masking (PVTM) strategy to randomly mask vision tokens, thereby improving the model's generalization ability. We conducted extensive experiments on three benchmark datasets, demonstrating that FaceShield significantly outperforms previous deep learning models and general MLLMs on four FAS tasks, i.e., coarse-grained classification, fine-grained classification, reasoning, and attack localization. Our instruction datasets, protocols, and code will be released soon.
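The abstract describes PVTM only at a high level. As a rough illustration of the idea, a minimal PyTorch sketch of prompt-guided random masking of vision tokens might look as follows; all names, shapes, and the relevance weighting are assumptions for illustration, not the paper's implementation:

import torch

def prompt_guided_vision_token_masking(vision_tokens, prompt_embed, mask_ratio=0.3):
    # Hypothetical PVTM sketch: randomly drop a fraction of vision tokens,
    # biased toward tokens that are less relevant to the text prompt.
    #   vision_tokens: (B, N, D) patch tokens from the vision encoder
    #   prompt_embed:  (B, D)    pooled embedding of the text prompt
    B, N, _ = vision_tokens.shape
    # Cosine similarity between each vision token and the prompt.
    sim = torch.nn.functional.cosine_similarity(
        vision_tokens, prompt_embed.unsqueeze(1), dim=-1)  # (B, N)
    # Turn similarity into a keep-probability; less relevant tokens are
    # more likely to be masked (the exact weighting in the paper may differ).
    keep_prob = torch.softmax(sim, dim=-1)
    num_mask = int(N * mask_ratio)
    # Sample tokens to mask, without replacement, inversely to relevance.
    mask_idx = torch.multinomial(1.0 - keep_prob + 1e-6, num_mask)  # (B, num_mask)
    keep = torch.ones(B, N, dtype=torch.bool, device=vision_tokens.device)
    keep.scatter_(1, mask_idx, False)
    # Keep only the surviving tokens (a fixed count per sample here).
    return vision_tokens[keep].view(B, N - num_mask, -1)

Because the masking is stochastic, each training step sees a different subset of vision tokens, which is what plausibly drives the generalization benefit the abstract attributes to PVTM.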

@article{wang2025_2505.09415,
  title={FaceShield: Explainable Face Anti-Spoofing with Multimodal Large Language Models},
  author={Hongyang Wang and Yichen Shi and Zhuofu Tao and Yuhao Gao and Liepiao Zhang and Xun Lin and Jun Feng and Xiaochen Yuan and Zitong Yu and Xiaochun Cao},
  journal={arXiv preprint arXiv:2505.09415},
  year={2025}
}