Detecting Bias in Black-Box Models Using Transparent Model Distillation
Black-box risk scoring models permeate our lives, yet they are typically proprietary and opaque. We propose a transparent model distillation approach to understand and detect bias in such models. Model distillation was originally designed to distill knowledge from a large, complex model (the teacher model) to a faster, simpler model (the student model) without significant loss in prediction accuracy. We add a third requirement, transparency, and show that it is possible to train transparent yet still accurate student models to understand the predictions made by black-box teacher models. Central to our approach is the use of datasets that contain two labels for each example: the risk score itself and the actual outcome the risk score was intended to predict. We fully characterize the asymptotic distribution of the difference between the risk-score and actual-outcome models, with variance estimates based on the bootstrap-of-little-bags. This suggests a new method for detecting bias in black-box risk scores: assess whether the contributions of protected features to the risk score differ statistically from their contributions to the actual outcome.
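As a rough illustration of this idea, the sketch below trains two transparent student models on synthetic data, one mimicking a hypothetical black-box risk score and one predicting the true outcome, and bootstraps the protected feature's coefficient in each before comparing the two contributions. The synthetic data, the plain bootstrap, and the choice of simple linear models as the transparent students are stand-ins for illustration only; the paper's approach uses bootstrap-of-little-bags variance estimates rather than the naive bootstrap shown here.

```python
# Illustrative sketch only: synthetic data, linear students, naive bootstrap.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, d = 5000, 5
X = rng.normal(size=(n, d))  # column 0 plays the role of a protected feature

# True outcome does not depend on the protected feature ...
true_outcome = (X[:, 1] + X[:, 2] + rng.normal(size=n) > 0).astype(float)
# ... but the (hypothetical) black-box risk score leans on it.
risk_score = X[:, 1] + X[:, 2] + 0.5 * X[:, 0]

def bootstrap_contribution(y, n_boot=200):
    """Bootstrap the protected feature's coefficient in a transparent student.

    A linear model stands in for the paper's transparent student; for the
    binary outcome this amounts to a linear probability model.
    """
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample rows with replacement
        coefs.append(LinearRegression().fit(X[idx], y[idx]).coef_[0])
    return np.mean(coefs), np.std(coefs, ddof=1)

score_mean, score_se = bootstrap_contribution(risk_score)
outcome_mean, outcome_se = bootstrap_contribution(true_outcome)

# Test whether the protected feature contributes differently to the
# risk score than to the actual outcome.
z = (score_mean - outcome_mean) / np.hypot(score_se, outcome_se)
print(f"risk-score model coefficient:     {score_mean:.3f} +/- {score_se:.3f}")
print(f"actual-outcome model coefficient: {outcome_mean:.3f} +/- {outcome_se:.3f}")
print(f"difference z-statistic:           {z:.2f}")
```

On this synthetic example the risk-score student recovers a coefficient near 0.5 for the protected feature while the outcome student's coefficient is near zero, so the difference is flagged as statistically significant, which is exactly the signature of bias the method is designed to surface.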