Better Fair than Sorry: Adversarial Missing Data Imputation for Fair GNNs

Abstract

Graph Neural Networks (GNNs) have achieved state-of-the-art results in many relevant tasks where decisions might disproportionately impact specific communities. However, existing work on fair GNNs often assumes either that protected attributes are fully observed or that the imputation of missing protected attributes is fair. In practice, biases in the imputation propagate to the model outcomes, leading these methods to overestimate the fairness of their predictions. We address this challenge by proposing Better Fair than Sorry (BFtS), a fair missing data imputation model for protected attributes. The key design principle behind BFtS is that imputations should approximate the worst-case scenario for fairness, i.e., the case in which optimizing fairness is hardest. We implement this idea using a 3-player adversarial scheme where two adversaries collaborate against a GNN-based classifier, and the classifier minimizes the maximum bias. Experiments using synthetic and real datasets show that BFtS often achieves a better fairness x accuracy trade-off than existing alternatives.
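To make the 3-player scheme described in the abstract concrete, below is a minimal PyTorch sketch of one possible min-max training loop, not the authors' implementation. The module names (GNNClassifier, imputer, bias_adv), the toy GCN layer, the MLP architectures, the bias proxy (how well the protected attribute can be recovered from the classifier's logits), and the weight lam are all assumptions made for illustration; the actual BFtS objective may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_bce(logits, target):
    """Binary cross-entropy written out so it stays differentiable w.r.t. a soft target."""
    return -(target * F.logsigmoid(logits) + (1 - target) * F.logsigmoid(-logits)).mean()

class GCNLayer(nn.Module):
    """Minimal graph convolution: A_hat @ X @ W, with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
    def forward(self, x, a_hat):
        return self.lin(a_hat @ x)

class GNNClassifier(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1, self.conv2 = GCNLayer(in_dim, hid_dim), GCNLayer(hid_dim, 1)
    def forward(self, x, a_hat):
        return self.conv2(torch.relu(self.conv1(x, a_hat)), a_hat).squeeze(-1)

in_dim, hid = 16, 32
clf      = GNNClassifier(in_dim, hid)                        # player 1: task classifier
imputer  = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                         nn.Linear(hid, 1), nn.Sigmoid())    # player 2: imputes missing s
bias_adv = nn.Sequential(nn.Linear(1, hid), nn.ReLU(),
                         nn.Linear(hid, 1))                  # player 3: recovers s from logits

opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(list(imputer.parameters()) + list(bias_adv.parameters()), lr=1e-3)

def train_step(x, a_hat, y, s_obs, obs_mask, lam=1.0):
    # Adversaries' turn: impute missing s so that bias is easiest to expose (worst case).
    s_hat  = imputer(x).squeeze(-1)
    s_full = torch.where(obs_mask, s_obs, s_hat)             # observed values stay fixed
    adv_logits = bias_adv(clf(x, a_hat).detach().unsqueeze(-1)).squeeze(-1)
    adv_loss = soft_bce(adv_logits, s_full)                  # low loss = s recoverable = biased
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Classifier's turn: accurate predictions whose bias the adversaries cannot expose.
    logits = clf(x, a_hat)
    with torch.no_grad():
        s_full = torch.where(obs_mask, s_obs, imputer(x).squeeze(-1))
    adv_logits = bias_adv(logits.unsqueeze(-1)).squeeze(-1)
    clf_loss = F.binary_cross_entropy_with_logits(logits, y) - lam * soft_bce(adv_logits, s_full)
    opt_clf.zero_grad(); clf_loss.backward(); opt_clf.step()
    return clf_loss.item()

# Toy usage on a random graph (binary labels y and protected attribute s are assumptions).
n = 100
x = torch.randn(n, in_dim)
a_hat = torch.eye(n)                                         # stand-in normalized adjacency
y = torch.randint(0, 2, (n,)).float()
s_obs = torch.randint(0, 2, (n,)).float()
obs_mask = torch.rand(n) < 0.5                               # only ~50% of s is observed
for _ in range(5):
    train_step(x, a_hat, y, s_obs, obs_mask)
```

In this sketch the imputer and the bias adversary jointly minimize the recovery loss (maximizing the measured bias), while the classifier subtracts that term from its task loss, which corresponds to minimizing the maximum bias over imputations.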

@article{lina2025_2311.01591,
  title={Better Fair than Sorry: Adversarial Missing Data Imputation for Fair GNNs},
  author={Debolina Halder Lina and Arlei Silva},
  journal={arXiv preprint arXiv:2311.01591},
  year={2025}
}