Efficient Fairness Testing in Large Language Models: Prioritizing Metamorphic Relations for Bias Detection

Large Language Models (LLMs) are increasingly deployed in a wide range of applications, raising critical concerns about fairness and potential biases in their outputs. This paper explores the prioritization of metamorphic relations (MRs) in metamorphic testing as a strategy to efficiently detect fairness issues within LLMs. Given the exponential growth of possible test cases, exhaustive testing is impractical; therefore, prioritizing MRs based on their effectiveness in detecting fairness violations is crucial. We apply a sentence diversity-based approach to compute diversity scores for MRs and rank them so as to optimize fault detection. Experimental results demonstrate that our proposed prioritization approach improves fault detection rates by 22% compared to random prioritization and 12% compared to distance-based prioritization, while reducing the time to first failure by 15% and 8%, respectively. Furthermore, our approach performs within 5% of fault-based prioritization in effectiveness, while significantly reducing the computational cost associated with fault labeling. These results validate the effectiveness of diversity-based MR prioritization in enhancing fairness testing for LLMs.
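The abstract does not specify how sentence diversity is computed, so the following is only a minimal illustrative sketch of the general idea: score each MR by how dissimilar its generated test sentences are (here approximated with TF-IDF vectors and average pairwise cosine distance, which may differ from the paper's actual diversity metric) and run the most diverse MRs first. The MR names and sentences are hypothetical examples, not the paper's relations or data.

```python
# Sketch of diversity-based MR prioritization (assumptions noted above).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def diversity_score(sentences):
    """Average pairwise cosine distance among an MR's generated test sentences."""
    n = len(sentences)
    if n < 2:
        return 0.0
    vectors = TfidfVectorizer().fit_transform(sentences)
    sims = cosine_similarity(vectors)
    # Average over off-diagonal entries only (ignore self-similarity).
    off_diag = sims[~np.eye(n, dtype=bool)]
    return float(1.0 - off_diag.mean())


def prioritize(mrs):
    """Order MRs from most to least diverse, so higher-diversity MRs run first."""
    return sorted(mrs, key=lambda mr: diversity_score(mr["sentences"]), reverse=True)


# Hypothetical fairness-oriented MRs for illustration only.
mrs = [
    {"name": "MR_gender_swap",
     "sentences": ["He is applying to be a nurse.", "She is applying to be a nurse."]},
    {"name": "MR_name_substitution",
     "sentences": ["Jamal requested a loan review.", "Jake requested a loan review."]},
]

for mr in prioritize(mrs):
    print(mr["name"], round(diversity_score(mr["sentences"]), 3))
```

In this sketch the ranking replaces random ordering: rather than executing MRs arbitrarily, the tester executes those whose source and follow-up sentences differ most, on the premise that more diverse inputs are likelier to expose fairness violations early.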
@article{giramata2025_2505.07870,
  title={Efficient Fairness Testing in Large Language Models: Prioritizing Metamorphic Relations for Bias Detection},
  author={Suavis Giramata and Madhusudan Srinivasan and Venkat Naidu Gudivada and Upulee Kanewala},
  journal={arXiv preprint arXiv:2505.07870},
  year={2025}
}