
Balancing Biases and Preserving Privacy on Balanced Faces in the Wild

IEEE Transactions on Image Processing (TIP), 2021
Abstract

Current face recognition (FR) models exhibit demographic biases. Our Balanced Faces in the Wild (BFW) dataset serves as a proxy to measure bias across ethnicity and gender subgroups, allowing one to characterize FR performance per subgroup. We show that results are non-optimal when a single score threshold determines whether sample pairs are genuine or impostors. Within subgroups, performance often varies significantly from the global average; thus, claims of specific error rates hold only for populations matching the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme on facial features extracted with state-of-the-art neural networks. This technique not only balances performance across subgroups but also boosts overall performance. A benefit of the proposed scheme is that it preserves identity information in the facial features while decreasing the demographic information they contain. Removing demographic knowledge prevents potential future biases from being injected into decision-making, and it improves privacy, since less information is available or can be inferred about an individual. We explore this qualitatively; we also show quantitatively that subgroup classifiers can no longer learn from features produced by the proposed domain adaptation scheme. For source code and data descriptions, see https://github.com/visionjo/facerec-bias-bfw.
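
To make the single-threshold claim concrete, the following minimal Python sketch (illustrative only, not the paper's code; the pairs.csv layout, column names, and the 1% false match rate target are assumptions) calibrates one global verification threshold and per-subgroup thresholds to the same false match rate (FMR), then reports the false non-match rate (FNMR) each subgroup actually experiences under each:

    import numpy as np
    import pandas as pd

    def threshold_at_fmr(scores, labels, fmr=0.01):
        """Smallest threshold whose FMR on impostor pairs is <= fmr."""
        impostor = np.sort(scores[labels == 0])
        # Accept the top `fmr` fraction of impostor scores.
        k = int(np.ceil(len(impostor) * (1.0 - fmr)))
        return impostor[min(k, len(impostor) - 1)]

    def fnmr(scores, labels, thr):
        """Fraction of genuine pairs rejected at threshold `thr`."""
        genuine = scores[labels == 1]
        return float((genuine < thr).mean())

    # Assumed file: one row per face pair with a similarity score,
    # a genuine/impostor label, and an ethnicity-gender subgroup tag.
    pairs = pd.read_csv("pairs.csv")  # columns: score, label, subgroup
    g_thr = threshold_at_fmr(pairs.score.values, pairs.label.values)
    for name, sub in pairs.groupby("subgroup"):
        s, y = sub.score.values, sub.label.values
        s_thr = threshold_at_fmr(s, y)
        print(f"{name}: FNMR at global thr = {fnmr(s, y, g_thr):.3f}, "
              f"at subgroup thr = {fnmr(s, y, s_thr):.3f}")

When subgroups differ, the per-subgroup FNMR under the single global threshold drifts away from the FNMR under each subgroup's own threshold, which is the gap behind the claim above.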

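The debiasing scheme itself is only summarized here; as one plausible reading of "preserve identity information while decreasing demographic information," the PyTorch sketch below trains a feature mapper with an identity head plus a gradient-reversed demographic head, an adversarial domain adaptation pattern. It is a stand-in for illustration, not the authors' implementation; the architecture, dimensions, and loss weighting are assumptions:

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negated, scaled gradient backward."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    feat_dim, n_ids, n_subgroups, lam = 512, 1000, 8, 1.0  # assumed sizes
    mapper = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                           nn.Linear(feat_dim, feat_dim))
    id_head = nn.Linear(feat_dim, n_ids)          # keeps identity information
    demo_head = nn.Linear(feat_dim, n_subgroups)  # adversary: predicts subgroup
    opt = torch.optim.Adam([*mapper.parameters(), *id_head.parameters(),
                            *demo_head.parameters()], lr=1e-4)
    ce = nn.CrossEntropyLoss()

    def step(feats, id_labels, demo_labels):
        z = mapper(feats)
        # Identity loss pulls identity info into z; the reversed gradient from
        # the demographic loss pushes subgroup info out of z.
        loss = ce(id_head(z), id_labels) \
             + ce(demo_head(GradReverse.apply(z, lam)), demo_labels)
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

After training, fitting a fresh subgroup classifier on the mapper's outputs and observing near-chance accuracy would mirror the quantitative check described in the abstract, that subgroup classifiers can no longer learn from the adapted features.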