SLIP: Spoof-Aware One-Class Face Anti-Spoofing with Language Image Pretraining

25 March 2025
Pei-Kai Huang, Jun-Xiong Chong, Cheng-Hsuan Chiang, Tzu-Hsien Chen, Tyng-Luh Liu, Chiou-Ting Hsu
Abstract

Face anti-spoofing (FAS) plays a pivotal role in ensuring the security and reliability of face recognition systems. With advances in vision-language pretrained (VLP) models, recent two-class FAS techniques have leveraged VLP guidance, but this potential remains unexplored in one-class FAS methods. One-class FAS focuses on learning intrinsic liveness features solely from live training images to differentiate between live and spoof faces. However, the lack of spoof training data can lead one-class FAS models to inadvertently incorporate domain information irrelevant to the live/spoof distinction (e.g., facial content), causing performance degradation when tested in a new application domain. To address this issue, we propose a novel framework called Spoof-aware one-class face anti-spoofing with Language Image Pretraining (SLIP). Given that live faces should ideally not be obscured by any spoof-attack-related objects (e.g., paper or masks) and are thus assumed to yield zero spoof cue maps, we first propose a language-guided spoof cue map estimation that enhances one-class FAS models by simulating faces covered by attack-related objects and generating the corresponding nonzero spoof cue maps. Next, we introduce a prompt-driven liveness feature disentanglement that alleviates live/spoof-irrelevant domain variations by separating live/spoof-relevant and domain-dependent information. Finally, we design an augmentation strategy that fuses latent features from live images with spoof prompts to generate spoof-like image features, diversifying latent spoof features and facilitating one-class FAS learning. Extensive experiments and ablation studies show that SLIP consistently outperforms previous one-class FAS methods.
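
The abstract points to two concrete training signals that can be sketched in code: live images are supervised toward all-zero spoof cue maps while simulated spoof-covered images are pushed toward nonzero maps, and spoof-like latent features are synthesized by fusing live image features with spoof-prompt text features. The following is a minimal PyTorch-style sketch of these two ideas under the assumption of CLIP-like spatial image features and text features; the names SpoofCueHead, cue_map_loss, and fuse_spoof_features, as well as the margin and mixing values, are hypothetical illustrations, not the authors' implementation.

# Hedged sketch of two SLIP training signals described in the abstract.
# All names, shapes, and constants below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpoofCueHead(nn.Module):
    """Maps spatial image features to a per-location spoof cue map."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, dim, H, W) -> cue map in [0, 1] of shape (B, 1, H, W)
        return torch.sigmoid(self.proj(feat_map))

def cue_map_loss(cue_live, cue_spoofed, margin: float = 0.5):
    # Live faces are assumed to yield zero cue maps; simulated
    # spoof-covered faces should yield nonzero ones.
    loss_live = cue_live.mean()                       # drive live cues toward zero
    loss_spoof = F.relu(margin - cue_spoofed).mean()  # keep spoof cues above a margin
    return loss_live + loss_spoof

def fuse_spoof_features(live_feat, spoof_prompt_feat, alpha: float = 0.3):
    # Augmentation: blend live image features with spoof-prompt text
    # features to diversify spoof-like latents for one-class training.
    live_feat = F.normalize(live_feat, dim=-1)
    spoof_prompt_feat = F.normalize(spoof_prompt_feat, dim=-1)
    mixed = (1 - alpha) * live_feat + alpha * spoof_prompt_feat
    return F.normalize(mixed, dim=-1)

# Example with dummy tensors standing in for VLP features:
head = SpoofCueHead(dim=512)
live_maps = head(torch.randn(4, 512, 14, 14))
spoofed_maps = head(torch.randn(4, 512, 14, 14))
loss = cue_map_loss(live_maps, spoofed_maps)
spoof_like = fuse_spoof_features(torch.randn(4, 512), torch.randn(4, 512))

At test time, under the same assumptions, the mean value of the predicted cue map could serve as a spoof score, with near-zero responses indicating a live face.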

View on arXiv: https://arxiv.org/abs/2503.19982
@article{huang2025_2503.19982,
  title={SLIP: Spoof-Aware One-Class Face Anti-Spoofing with Language Image Pretraining},
  author={Pei-Kai Huang and Jun-Xiong Chong and Cheng-Hsuan Chiang and Tzu-Hsien Chen and Tyng-Luh Liu and Chiou-Ting Hsu},
  journal={arXiv preprint arXiv:2503.19982},
  year={2025}
}