Say No to the Discrimination: Learning Fair Graph Neural Networks with
Limited Sensitive Attribute Information
Graph neural networks (GNNs) have achieved state-of-the-art performance in modeling graphs. Despite their great success, GNNs, like many other machine learning models, risk inheriting bias from the training data. Moreover, this bias can be magnified by graph structures and the message-passing mechanism of GNNs. The risk of discrimination limits the adoption of GNNs in sensitive domains such as credit score estimation. Although fair classification has been studied extensively on i.i.d. data, methods that address discrimination on non-i.i.d. data remain limited. Furthermore, the practical scenario of sparse annotations of sensitive attributes is rarely considered in existing work. Therefore, we study the novel and important problem of learning fair GNNs with limited sensitive attribute information. We propose a framework called FairGNN, which reduces the bias of GNNs while maintaining high node classification accuracy by leveraging graph structures and limited sensitive information. Theoretical analysis shows that FairGNN can ensure fairness under mild conditions given a limited number of nodes with known sensitive attributes. Experiments on real-world datasets demonstrate the effectiveness of the proposed framework in eliminating discrimination while maintaining high node classification accuracy.
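To make the setup concrete, below is a minimal PyTorch sketch of the general idea the abstract describes: a GNN node classifier debiased adversarially, with a sensitive-attribute estimator filling in the values that are missing for most nodes. This is a hedged illustration under stated assumptions, not the paper's actual architecture or code; all names (GCNLayer, FairGNNSketch, lam, idx_sens, etc.) are hypothetical.

```python
# Hypothetical sketch: adversarial debiasing of a GNN with limited
# sensitive-attribute labels. An estimator predicts the missing sensitive
# attributes; an adversary tries to recover them from node embeddings, and
# the main model is trained to fool it. Illustrative only, not FairGNN's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One dense graph-convolution layer: H' = A_hat @ H @ W,
    where A_hat is a (normalized) adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return self.linear(a_hat @ h)

class FairGNNSketch(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.estimator = GCNLayer(in_dim, 1)      # predicts missing sensitive attrs
        self.backbone = GCNLayer(in_dim, hid_dim) # GNN encoder
        self.classifier = nn.Linear(hid_dim, 1)   # node-label head
        self.adversary = nn.Linear(hid_dim, 1)    # recovers sensitive attr from h

    def forward(self, a_hat, x):
        s_logit = self.estimator(a_hat, x)
        h = F.relu(self.backbone(a_hat, x))
        return self.classifier(h), self.adversary(h), s_logit

def train_step(model, opt_main, opt_adv, a_hat, x, y,
               idx_label, idx_sens, s_known, lam=1.0):
    bce = F.binary_cross_entropy_with_logits

    # Step 1: train the adversary to predict the (estimated) sensitive attribute.
    _, s_adv, s_logit = model(a_hat, x)
    s_target = torch.sigmoid(s_logit).detach()
    s_target[idx_sens] = s_known                  # true values where available
    loss_adv = bce(s_adv, s_target)
    opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()

    # Step 2: train estimator + backbone + classifier; the fairness term
    # rewards fooling the adversary (a gradient-reversal-style objective).
    y_logit, s_adv, s_logit = model(a_hat, x)
    loss_cls = bce(y_logit[idx_label], y[idx_label])
    loss_est = bce(s_logit[idx_sens], s_known)    # supervise the estimator
    loss_fair = -bce(s_adv, s_target)
    loss = loss_cls + loss_est + lam * loss_fair
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss_cls.item(), loss_adv.item()
```

For the two-player training to behave as intended, opt_main should cover only the estimator, backbone, and classifier parameters, while opt_adv covers only the adversary's (e.g., two separate torch.optim.Adam instances); lam trades off accuracy against fairness. The paper's actual objective and theoretical conditions differ in detail; this sketch only conveys the limited-sensitive-attribute adversarial setup.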