
Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation

Abstract

Standard benchmarks of bias and fairness in large language models (LLMs) measure the association between social attributes implied in user prompts and short LLM responses. In the commonly studied domain of gender-occupation bias, we test whether these benchmarks are robust to lengthening the LLM responses as a measure of Realistic Use and Tangible Effects (i.e., RUTEd evaluations). From the current literature, we adapt three standard bias metrics (neutrality, skew, and stereotype), and we develop analogous RUTEd evaluations from three contexts of real-world use: children's bedtime stories, user personas, and English language learning exercises. We find that standard bias metrics have no significant correlation with the more realistic bias metrics. For example, selecting the least biased model based on the standard "trick tests" coincides with the least biased model as measured in realistic use no more often than would be expected by chance. We suggest that there is not yet evidence to justify standard benchmarks as reliable proxies of real-world biases, and we encourage further development of context-specific RUTEd evaluations.
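The comparison described above can be illustrated with a small sketch (not the authors' code; the per-model bias scores below are hypothetical placeholders): given bias scores from a standard benchmark and from a RUTEd evaluation, one can check their rank correlation and whether the two evaluations agree on the least biased model.

```python
# Minimal sketch of comparing model rankings under a standard "trick test"
# bias metric vs. a RUTEd evaluation. Scores are hypothetical placeholders;
# only the rank-correlation and agreement logic is shown.
from scipy.stats import spearmanr

# Hypothetical per-model bias scores (lower = less biased).
standard_scores = {"model_a": 0.12, "model_b": 0.31, "model_c": 0.07}
ruted_scores = {"model_a": 0.28, "model_b": 0.09, "model_c": 0.22}

models = sorted(standard_scores)
rho, p_value = spearmanr(
    [standard_scores[m] for m in models],
    [ruted_scores[m] for m in models],
)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# Does the least-biased model under the standard benchmark coincide with
# the least-biased model under the RUTEd evaluation?
least_standard = min(standard_scores, key=standard_scores.get)
least_ruted = min(ruted_scores, key=ruted_scores.get)
print("Least-biased model agrees across evaluations:", least_standard == least_ruted)
```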

@article{lum2025_2402.12649,
  title={Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation},
  author={Kristian Lum and Jacy Reese Anthis and Kevin Robinson and Chirag Nagpal and Alexander D'Amour},
  journal={arXiv preprint arXiv:2402.12649},
  year={2025}
}