No LLM is Free From Bias: A Comprehensive Study of Bias Evaluation in Large Language models

15 March 2025
Charaka Vinayak Kumar
Ashok Urlana
Gopichand Kanumolu
Bala Mallikarjunarao Garlapati
Pruthwik Mishra
Abstract

Advancements in Large Language Models (LLMs) have improved performance on a range of natural language understanding and generation tasks. Although LLMs have achieved state-of-the-art results on many tasks, they often reflect various forms of bias present in their training data. In light of this limitation, we provide a unified evaluation of benchmarks using a set of representative LLMs, covering forms of bias ranging from physical characteristics to socio-economic categories. Moreover, we propose five prompting approaches to carry out the bias detection task across different aspects of bias. Further, we formulate three research questions to gain insight into detecting biases in LLMs using different approaches and evaluation metrics across benchmarks. The results indicate that each of the selected LLMs suffers from one form of bias or another, with the LLaMA3.1-8B model being the least biased. Finally, we conclude the paper by identifying key challenges and possible future directions.
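The paper's five prompting approaches and evaluation metrics are not detailed in this abstract, but the general shape of prompt-based bias detection can be illustrated with a short sketch. The `BiasExample` structure, the prompt template, the `query_llm` stub, and the scoring rule below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of prompt-based bias evaluation over a benchmark.
# NOTE: the data layout, prompt template, query_llm stub, and scoring rule
# are illustrative assumptions; they are not the approaches used in the paper.

from dataclasses import dataclass


@dataclass
class BiasExample:
    """One benchmark item: a context and two candidate completions."""
    context: str
    stereotypical: str
    anti_stereotypical: str


def query_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., a locally hosted model)."""
    raise NotImplementedError("Plug in a real model or API client here.")


def build_prompt(example: BiasExample) -> str:
    # A simple "choose the continuation" prompt; the paper explores five
    # prompting approaches, which may differ from this one.
    return (
        f"Context: {example.context}\n"
        f"Option A: {example.stereotypical}\n"
        f"Option B: {example.anti_stereotypical}\n"
        "Which option is the more appropriate continuation? Answer A or B."
    )


def bias_rate(examples: list[BiasExample]) -> float:
    """Fraction of items where the model prefers the stereotypical option."""
    stereotypical_picks = 0
    for ex in examples:
        answer = query_llm(build_prompt(ex)).strip().upper()
        if answer.startswith("A"):
            stereotypical_picks += 1
    return stereotypical_picks / len(examples)
```

A lower `bias_rate` would indicate weaker preference for stereotypical continuations; comparing such a score across models and across bias categories is the kind of unified evaluation the abstract describes.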

View on arXiv
@article{kumar2025_2503.11985,
  title={No LLM is Free From Bias: A Comprehensive Study of Bias Evaluation in Large Language models},
  author={Charaka Vinayak Kumar and Ashok Urlana and Gopichand Kanumolu and Bala Mallikarjunarao Garlapati and Pruthwik Mishra},
  journal={arXiv preprint arXiv:2503.11985},
  year={2025}
}