
The Impact of Unstated Norms in Bias Analysis of Language Models

Abstract

Bias in large language models (LLMs) takes many forms, from overt discrimination to implicit stereotypes. Counterfactual bias evaluation is a widely used approach to quantifying bias that often relies on template-based probes explicitly stating group membership; it measures whether the outcome of a task performed by an LLM is invariant to a change in group membership. In this work, we find that template-based probes can yield unrealistic bias measurements. For example, LLMs appear to mistakenly classify text associated with White race as negative at higher rates than text associated with other groups. We hypothesize that this arises artificially from a mismatch between commonly unstated norms, in the form of markedness, in the pretraining text of LLMs (e.g., Black president vs. president) and the templates used for bias measurement (e.g., Black president vs. White president). Our findings highlight the potentially misleading impact of varying group membership through explicit mention in counterfactual bias quantification.
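The template-based counterfactual setup described above can be sketched as follows. The template, group terms, and function name here are illustrative assumptions, not the paper's actual probe set; the point is only that every probe explicitly states a group term, including the otherwise unstated norm (e.g., "White"), which is the mismatch with pretraining text that the paper identifies.

```python
# Minimal sketch of template-based counterfactual bias probes:
# the same template is instantiated with different group terms, and a
# downstream bias metric checks whether a model's output is invariant
# across the resulting sentences. Template and groups are illustrative.

TEMPLATE = "The {group} president gave a speech."
GROUPS = ["Black", "White", "Asian"]  # explicit mention of every group


def make_counterfactual_probes(template, groups):
    """Return one probe sentence per group from a shared template."""
    return {g: template.format(group=g) for g in groups}


probes = make_counterfactual_probes(TEMPLATE, GROUPS)
for group, sentence in probes.items():
    print(group, "->", sentence)
```

In a real evaluation, each probe would be scored by the model under test (e.g., for sentiment), and differences in scores across groups would be reported as bias; the paper's argument is that the explicitly marked "White" variants diverge from how such text appears in pretraining data.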

@article{kohankhaki2025_2404.03471,
  title={The Impact of Unstated Norms in Bias Analysis of Language Models},
  author={Farnaz Kohankhaki and D. B. Emerson and Jacob-Junqi Tian and Laleh Seyyed-Kalantari and Faiza Khan Khattak},
  journal={arXiv preprint arXiv:2404.03471},
  year={2025}
}