Correcting Annotator Bias in Training Data: Population-Aligned Instance Replication (PAIR)

Models trained on crowdsourced labels may not reflect broader population views, because the people who work as annotators are rarely representative of that population. We propose Population-Aligned Instance Replication (PAIR), a method to address bias caused by non-representative annotator pools. In a simulation study of offensive language and hate speech labeling, we create two types of annotators with different labeling tendencies and generate datasets with varying proportions of the two types. Models trained on unbalanced annotator pools show poor calibration compared to those trained on representative data. By duplicating labels from underrepresented annotator groups to match population proportions, PAIR reduces this bias without requiring additional annotations. These results suggest that statistical techniques from survey research can improve model performance. We conclude with practical recommendations for improving the representativeness of training data and model performance.
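
The following is a minimal sketch of the replication idea described above, assuming each label record carries its annotator's group membership and that target population proportions are known. The data layout, the function name `pair_replicate`, and the whole-copy rounding scheme are illustrative assumptions, not the paper's implementation.

```python
# Sketch of population-aligned replication: duplicate labels from
# underrepresented annotator groups until group proportions in the
# training data approximate target population proportions.
# (Illustrative only; column names and rounding are assumptions.)
import pandas as pd

def pair_replicate(labels: pd.DataFrame,
                   group_col: str,
                   target_props: dict) -> pd.DataFrame:
    counts = labels[group_col].value_counts()
    # Use the group that is best represented relative to its target share
    # as the baseline; other groups are replicated up toward their shares.
    baseline = max(counts.index, key=lambda g: counts[g] / target_props[g])
    scale = counts[baseline] / target_props[baseline]
    parts = []
    for group, prop in target_props.items():
        group_rows = labels[labels[group_col] == group]
        desired = int(round(prop * scale))
        reps = max(1, round(desired / len(group_rows)))  # whole-copy replication
        parts.append(pd.concat([group_rows] * reps, ignore_index=True))
    return pd.concat(parts, ignore_index=True)

# Example: the annotator pool is 80% group A / 20% group B,
# but the target population is 50/50.
labels = pd.DataFrame({
    "text_id": range(100),
    "label": [0] * 100,
    "annotator_group": ["A"] * 80 + ["B"] * 20,
})
balanced = pair_replicate(labels, "annotator_group", {"A": 0.5, "B": 0.5})
print(balanced["annotator_group"].value_counts(normalize=True))
```

Whole-copy replication keeps each duplicated label intact and requires no new annotations; a finer-grained alignment could instead assign fractional instance weights, at the cost of requiring a training pipeline that supports weighted examples.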
@article{eckman2025_2501.06826,
  title   = {Correcting Annotator Bias in Training Data: Population-Aligned Instance Replication (PAIR)},
  author  = {Stephanie Eckman and Bolei Ma and Christoph Kern and Rob Chew and Barbara Plank and Frauke Kreuter},
  journal = {arXiv preprint arXiv:2501.06826},
  year    = {2025}
}