Censored and Fair Universal Representations using Generative Adversarial Models

Abstract

We present a data-driven framework for learning \textit{censored and fair universal representations} (CFUR) that provide statistical fairness guarantees for all downstream learning tasks, including those not known \textit{a priori}. Our framework leverages recent advances in adversarial learning to allow a data holder to learn representations that decouple a set of sensitive attributes from the rest of the dataset. The resulting problem of finding the optimal randomizing mechanism with specific fairness/censoring guarantees is formulated as a constrained minimax game between an encoder and an adversary, where the constraint ensures a measure of usefulness (utility) of the representation. We show that, for appropriately chosen adversarial loss functions, our framework enables defining demographic parity for fair representations and also clarifies the optimal adversarial strategy against strong information-theoretic adversaries. We evaluate the performance of our framework on multi-dimensional Gaussian mixture models and publicly available datasets including UCI Census, GENKI, Human Activity Recognition (HAR), and UTKFace. Our experimental results show that multiple sensitive features can be effectively censored while preserving accuracy for several \textit{a priori} unknown downstream tasks. Finally, our results make precise the tradeoff between censoring and fidelity of the representation, as well as the fairness-utility tradeoffs for downstream tasks.
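To make the constrained minimax game concrete, the sketch below shows one plausible training loop in PyTorch (the framework choice, network sizes, noise scale, and the penalty weight `lam` are all assumptions, not the paper's prescribed implementation). An encoder produces a randomized representation Z of the data X, a log-loss adversary is trained to recover the sensitive attribute S from Z, and the encoder is trained to maximize the adversary's loss while a distortion penalty keeps Z close to X; the paper's hard utility constraint is relaxed here to a Lagrangian-style penalty.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Randomizing mechanism: maps X to a noisy representation Z."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):
        out = self.net(x)
        # Injected noise makes the mechanism randomizing (scale is an assumption).
        return out + 0.1 * torch.randn_like(out)

class Adversary(nn.Module):
    """Attempts to infer the sensitive attribute S from Z."""
    def __init__(self, dim, n_sensitive):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_sensitive))

    def forward(self, z):
        return self.net(z)

def train_step(enc, adv, opt_enc, opt_adv, x, s, lam=1.0):
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

    # Adversary update: minimize log-loss in predicting S from Z.
    adv_loss = ce(adv(enc(x).detach()), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Encoder update: maximize the adversary's loss (censoring) while
    # paying a distortion penalty that keeps Z close to X (utility).
    z = enc(x)
    enc_loss = -ce(adv(z), s) + lam * mse(z, x)
    opt_enc.zero_grad()
    enc_loss.backward()
    opt_enc.step()
    return adv_loss.item(), enc_loss.item()
```

In practice one would alternate `train_step` over minibatches; sweeping `lam` (or updating it as a dual variable) traces out the censoring-fidelity tradeoff that the abstract refers to.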
