Optimal Transport-Guided Source-Free Adaptation for Face Anti-Spoofing

29 March 2025
Zhuowei Li
Tianchen Zhao
Xiang Xu
Zheng Zhang
Zhihua Li
Xuanbai Chen
Qin Zhang
Alessandro Bergamo
Anil K. Jain
Yifan Xing
Abstract

Developing a face anti-spoofing model that meets the security requirements of clients worldwide is challenging due to the domain gap between training datasets and diverse end-user test data. Moreover, for security and privacy reasons, it is undesirable for clients to share a large amount of their face data with service providers. In this work, we introduce a novel method in which the face anti-spoofing model can be adapted by the client itself to a target domain at test time using only a small sample of data, while keeping model parameters and training data inaccessible to the client. Specifically, we develop a prototype-based base model and an optimal transport-guided adaptor that enables adaptation in either a lightweight-training or a training-free fashion, without updating the base model's parameters. Furthermore, we propose geodesic mixup, an optimal transport-based synthesis method that generates augmented training data along the geodesic path between the source prototypes and the target data distribution. This allows training a lightweight classifier to effectively adapt to target-specific characteristics while retaining essential knowledge learned from the source domain. In cross-domain and cross-attack settings, compared with recent methods, our method achieves average relative improvements of 19.17% in HTER and 8.58% in AUC, respectively.
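
For intuition, the geodesic mixup step described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' released code: it assumes uniform marginals, an entropic Sinkhorn coupling with a squared-Euclidean cost, and a plain convex combination of coupled pairs as a stand-in for interpolation along the OT geodesic; the names sinkhorn_coupling and geodesic_mixup are my own.

# Hypothetical sketch of OT-guided geodesic mixup (illustration only, not the
# authors' implementation): couple source prototypes to target features with
# entropic optimal transport, then interpolate coupled pairs so the synthesized
# points lie between the source prototypes and the target data distribution.
import numpy as np

def sinkhorn_coupling(prototypes, targets, reg=0.05, n_iters=200):
    """Entropic OT coupling between prototypes (k, d) and targets (n, d)
    with uniform marginals, via plain Sinkhorn iterations."""
    k, n = prototypes.shape[0], targets.shape[0]
    a = np.full(k, 1.0 / k)
    b = np.full(n, 1.0 / n)
    cost = ((prototypes[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
    cost = cost / cost.max()                      # normalize for numerical stability
    K = np.exp(-cost / reg)
    u, v = np.ones(k), np.ones(n)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return (u[:, None] * K) * v[None, :]          # coupling matrix, shape (k, n)

def geodesic_mixup(prototypes, targets, t=0.5, reg=0.05):
    """Generate augmented features (1 - t) * prototype + t * target for
    strongly coupled pairs, approximating displacement interpolation."""
    T = sinkhorn_coupling(prototypes, targets, reg=reg)
    rows, cols = np.nonzero(T > T.mean())         # keep high-mass pairs
    mixed = (1.0 - t) * prototypes[rows] + t * targets[cols]
    weights = T[rows, cols]
    return mixed, weights / weights.sum()

rng = np.random.default_rng(0)
src_prototypes = rng.normal(size=(4, 16))         # e.g., frozen live/spoof prototypes
tgt_features = rng.normal(loc=0.5, size=(32, 16)) # small target-domain sample
augmented, w = geodesic_mixup(src_prototypes, tgt_features, t=0.3)
print(augmented.shape, w.shape)                   # augmented features + OT weights

In a source-free setting like the one the abstract describes, only the frozen prototypes and a small batch of client features would be needed to produce such augmented points for training the lightweight classifier.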

@article{li2025_2503.22984,
  title={Optimal Transport-Guided Source-Free Adaptation for Face Anti-Spoofing},
  author={Zhuowei Li and Tianchen Zhao and Xiang Xu and Zheng Zhang and Zhihua Li and Xuanbai Chen and Qin Zhang and Alessandro Bergamo and Anil K. Jain and Yifan Xing},
  journal={arXiv preprint arXiv:2503.22984},
  year={2025}
}