Training and Evaluating with Human Label Variation: An Empirical Study

Human label variation (HLV) challenges the standard assumption that a labelled instance has a single ground truth, instead embracing the natural variation in human annotation to train and evaluate models. While various training methods and metrics for HLV have been proposed, it is still unclear which methods and metrics perform best in which settings. We propose new evaluation metrics for HLV that leverage fuzzy set theory. Since these metrics are differentiable, we further experiment with employing them as training objectives. We conduct an extensive study over 6 HLV datasets, testing 14 training methods and 6 evaluation metrics. We find that training on either disaggregated annotations or soft labels performs best across metrics, outperforming training with the proposed differentiable-metric objectives. We also show that our proposed soft metric is more interpretable and correlates best with human preference.
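The abstract does not spell out the fuzzy-set formulation, so the following is only a minimal sketch of the general idea: treat the model's predicted label distribution and the soft label aggregated from annotators as fuzzy sets over the label space, and score their overlap with a Jaccard-style ratio using elementwise min as fuzzy intersection and max as fuzzy union. The function name fuzzy_jaccard and the toy vectors are illustrative assumptions, not the paper's definitions; because the score is differentiable almost everywhere, one minus it can serve as a training loss, mirroring the abstract's use of differentiable metrics as training objectives.

import torch

def fuzzy_jaccard(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # Treat each label distribution (shape (..., num_labels), values in
    # [0, 1]) as a fuzzy set: elementwise min is the fuzzy intersection,
    # elementwise max the fuzzy union. The ratio lies in [0, 1] and is
    # differentiable almost everywhere.
    intersection = torch.minimum(p, q).sum(dim=-1)
    union = torch.maximum(p, q).sum(dim=-1)
    return intersection / union.clamp_min(1e-12)

# Toy example (illustrative values): model softmax output vs. a soft label
# built from annotator votes (e.g., 3 of 5 and 2 of 5 annotators).
p = torch.tensor([0.7, 0.2, 0.1])
q = torch.tensor([0.6, 0.4, 0.0])
loss = 1.0 - fuzzy_jaccard(p, q)  # usable directly as a training objective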
@article{kurniawan2025_2502.01891,
  title   = {Training and Evaluating with Human Label Variation: An Empirical Study},
  author  = {Kemal Kurniawan and Meladel Mistica and Timothy Baldwin and Jey Han Lau},
  journal = {arXiv preprint arXiv:2502.01891},
  year    = {2025}
}