DECASTE: Unveiling Caste Stereotypes in Large Language Models through Multi-Dimensional Bias Analysis

20 May 2025
Prashanth Vijayaraghavan
Soroush Vosoughi
Lamogha Chiazor
Raya Horesh
Rogerio Abreu de Paula
Ehsan Degan
Vandana Mukherjee
Main: 3 pages, 5 figures, 6 tables. Appendix: 11 pages.
Abstract

Recent advancements in large language models (LLMs) have revolutionized natural language processing (NLP) and expanded their applications across diverse domains. However, despite their impressive capabilities, LLMs have been shown to reflect and perpetuate harmful societal biases, including those based on ethnicity, gender, and religion. A critical and underexplored issue is the reinforcement of caste-based biases, particularly towards India's marginalized caste groups such as Dalits and Shudras. In this paper, we address this gap by proposing DECASTE, a novel, multi-dimensional framework designed to detect and assess both implicit and explicit caste biases in LLMs. Our approach evaluates caste fairness across four dimensions: socio-cultural, economic, educational, and political, using a range of customized prompting strategies. By benchmarking several state-of-the-art LLMs, we reveal that these models systematically reinforce caste biases, with significant disparities observed in the treatment of oppressed versus dominant caste groups. For example, bias scores are notably elevated when comparing Dalits and Shudras with dominant caste groups, reflecting societal prejudices that persist in model outputs. These results expose the subtle yet pervasive caste biases in LLMs and emphasize the need for more comprehensive and inclusive bias evaluation methodologies that assess the potential risks of deploying such models in real-world contexts.
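
The abstract reports elevated bias scores for Dalits and Shudras relative to dominant caste groups, but it does not spell out the scoring metric here. Purely as an illustrative sketch, and not the paper's actual method, the following Python assumes a simple disparity measure: the difference in the rate at which stereotyped attributes appear in model responses about oppressed versus dominant caste groups. The function names, group labels, and toy data are all hypothetical.

# Hypothetical sketch of a disparity score; DECASTE's actual scoring
# procedure may differ. Responses are assumed to come from prompting an
# LLM with caste-conditioned templates for one of the four dimensions
# (socio-cultural, economic, educational, political).

def stereotype_rate(responses: list[str], stereotyped_terms: set[str]) -> float:
    """Fraction of responses mentioning any stereotyped attribute."""
    hits = sum(any(t in r.lower() for t in stereotyped_terms) for r in responses)
    return hits / len(responses) if responses else 0.0

def bias_score(responses_by_group: dict[str, list[str]],
               stereotyped_terms: set[str]) -> float:
    """Assumed disparity metric: positive values mean stereotyped
    attributes skew toward the oppressed caste group."""
    oppressed = stereotype_rate(responses_by_group["oppressed"], stereotyped_terms)
    dominant = stereotype_rate(responses_by_group["dominant"], stereotyped_terms)
    return oppressed - dominant

# Toy example (fabricated strings, not real model output):
responses = {
    "oppressed": ["... suited to manual labour ...", "... unskilled work ..."],
    "dominant": ["... a scholarly profession ...", "... manual labour ..."],
}
print(bias_score(responses, {"manual labour", "unskilled work"}))  # 0.5

Averaging such per-prompt scores within each of the four dimensions would be one way to arrive at the dimension-level disparities the abstract describes.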

@article{vijayaraghavan2025_2505.14971,
  title={DECASTE: Unveiling Caste Stereotypes in Large Language Models through Multi-Dimensional Bias Analysis},
  author={Prashanth Vijayaraghavan and Soroush Vosoughi and Lamogha Chiazor and Raya Horesh and Rogerio Abreu de Paula and Ehsan Degan and Vandana Mukherjee},
  journal={arXiv preprint arXiv:2505.14971},
  year={2025}
}