Are Explainability Tools Gender Biased? A Case Study on Face Presentation Attack Detection

26 April 2023
Marco Huber, Meiling Fang, Fadi Boutros, Naser Damer
FaML · CVBM

Papers citing "Are Explainability Tools Gender Biased? A Case Study on Face Presentation Attack Detection"

SynthASpoof: Developing Face Presentation Attack Detection Based on Privacy-friendly Synthetic Data
Meiling Fang, Marco Huber, Naser Damer
AAML
05 Mar 2023
Fairness in Face Presentation Attack Detection
Meiling Fang, Wufei Yang, Arjan Kuijper, Vitomir Štruc, Naser Damer
CVBM
19 Sep 2022
Explaining Face Presentation Attack Detection Using Natural Language
H. Mirzaalian, Mohamed E. Hussein, L. Spinoulas, Jonathan May, Wael AbdAlmageed
CVBM · FAtt · AAML
08 Nov 2021
CelebA-Spoof: Large-Scale Face Anti-Spoofing Dataset with Rich Annotations
Yuanhan Zhang, Zhen-fei Yin, Yidong Li, Guojun Yin, Junjie Yan, Jing Shao, Ziwei Liu
CVBM
24 Jul 2020