FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods

11 March 2025
Seyyed Mohammad Sadegh Moosavi Khorzooghi
Poojitha Thota
Mohit Singhal
Abolfazl Asudeh
Gautam Das
Shirin Nilizadeh
Abstract

The lack of a common platform and benchmark datasets for evaluating face obfuscation methods has been a challenge, with every method being tested using arbitrary experiments, datasets, and metrics. While prior work has demonstrated that face recognition systems exhibit bias against some demographic groups, there is a substantial gap in our understanding of the fairness of face obfuscation methods. Fair face obfuscation methods can ensure equitable protection across diverse demographic groups, especially since they can be used to preserve the privacy of vulnerable populations. To address these gaps, this paper introduces a comprehensive framework, named FairDeFace, designed to assess the adversarial robustness and fairness of face obfuscation methods. The framework comprises a set of modules encompassing data benchmarks, face detection and recognition algorithms, adversarial models, utility detection models, and fairness metrics. FairDeFace serves as a versatile platform into which any face obfuscation method can be integrated, allowing rigorous testing and comparison with other state-of-the-art methods. In its current implementation, FairDeFace incorporates six attacks and several privacy, utility, and fairness metrics. Using FairDeFace, we conducted more than 500 experiments to evaluate and compare the adversarial robustness of seven face obfuscation methods. This extensive analysis yielded many interesting findings, both about the degree of robustness of existing methods and about their biases against some gender or racial groups. FairDeFace also visualizes the image regions that obfuscation and verification attacks focus on, showing not only which areas are most altered during obfuscation for some demographic groups, but also, by comparing the focus areas of obfuscation and verification, why obfuscation fails for them.
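To make the fairness evaluation concrete, one natural metric over the abstract's per-demographic attack results is the gap in attack success rates across groups: an obfuscation method is fairer when a verification attack re-identifies obfuscated faces at similar rates for every group. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names and the `(group, success)` record format are assumptions for this example.

```python
from collections import defaultdict


def group_success_rates(records):
    """Compute per-group attack success rates.

    `records` is an iterable of (group, success) pairs, where `success`
    is True when a verification attack re-identified an obfuscated face.
    Returns a dict mapping each group to its success rate in [0, 1].
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, success in records:
        totals[group] += 1
        if success:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}


def fairness_gap(rates):
    """Largest difference in attack success rate between any two groups.

    A gap of 0 means the attack succeeds equally often for all groups;
    a large gap indicates the obfuscation protects some groups less.
    """
    vals = list(rates.values())
    return max(vals) - min(vals)
```

For example, if an attack re-identifies 1 of 2 obfuscated faces in group A but 2 of 2 in group B, the rates are 0.5 and 1.0 and the gap is 0.5, flagging unequal protection.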

@article{khorzooghi2025_2503.08731,
  title={FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods},
  author={Seyyed Mohammad Sadegh Moosavi Khorzooghi and Poojitha Thota and Mohit Singhal and Abolfazl Asudeh and Gautam Das and Shirin Nilizadeh},
  journal={arXiv preprint arXiv:2503.08731},
  year={2025}
}