
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs

Main: 9 pages · 14 figures · Bibliography: 4 pages · 10 tables · Appendix: 11 pages
Abstract

The recent rapid adoption of large language models (LLMs) highlights the critical need for benchmarking their fairness. Conventional fairness metrics, which focus on discrete accuracy-based evaluations (i.e., prediction correctness), fail to capture the implicit impact of model uncertainty (e.g., higher model confidence about one group than another despite similar accuracy). To address this limitation, we propose an uncertainty-aware fairness metric, UCerF, which enables a fine-grained evaluation of model fairness that better reflects the internal bias in model decisions than conventional fairness measures. Furthermore, observing data size, diversity, and clarity issues in current datasets, we introduce a new gender-occupation fairness evaluation dataset with 31,756 samples for co-reference resolution, offering a more diverse dataset better suited to evaluating modern LLMs. We establish a benchmark using our metric and dataset, and apply it to evaluate the behavior of ten open-source LLMs. For example, Mistral-7B exhibits suboptimal fairness due to high confidence in incorrect predictions, a detail overlooked by Equalized Odds but captured by UCerF. Overall, our proposed LLM benchmark, which evaluates fairness with uncertainty awareness, paves the way for developing more transparent and accountable AI systems.
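The abstract contrasts accuracy-only fairness metrics with uncertainty-aware evaluation. The Python sketch below is not the paper's UCerF definition (which is given in the full text); it is a minimal illustration, using hypothetical numbers, of the underlying idea that two groups with identical accuracy can differ in model confidence, a gap that accuracy-based metrics such as Equalized Odds cannot see.

# Illustrative sketch only: NOT the paper's UCerF metric.
# Shows how per-group confidence can differ even when per-group accuracy is equal.
import numpy as np

def group_report(correct, confidence, groups):
    """Summarize accuracy and mean confidence per demographic group.

    correct    : array of 0/1 prediction correctness
    confidence : array of model confidence in its prediction (e.g., softmax probability)
    groups     : array of group labels
    """
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[str(g)] = {
            "accuracy": float(correct[mask].mean()),
            "mean_confidence": float(confidence[mask].mean()),
        }
    return report

# Hypothetical data: both groups have 75% accuracy, so an accuracy-based
# comparison sees no gap, but the model is systematically more certain about group A.
correct    = np.array([1, 1, 0, 1, 1, 1, 0, 1])
confidence = np.array([0.95, 0.92, 0.90, 0.94, 0.62, 0.58, 0.55, 0.61])
groups     = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_report(correct, confidence, groups))

An uncertainty-aware metric in the spirit of UCerF would fold such confidence disparities into the fairness score rather than scoring only prediction correctness.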

@article{wang2025_2505.23996,
  title={Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs},
  author={Yinong Oliver Wang and Nivedha Sivakumar and Falaah Arif Khan and Rin Metcalf Susa and Adam Golinski and Natalie Mackraz and Barry-John Theobald and Luca Zappella and Nicholas Apostoloff},
  journal={arXiv preprint arXiv:2505.23996},
  year={2025}
}