Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs

4 February 2025
Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo
Abstract

Algorithmic fairness has conventionally adopted a perspective of racial color-blindness (i.e., difference-unaware treatment). We contend that in a range of important settings, group difference awareness matters. For example, differentiating between groups may be necessary in legal contexts (e.g., the U.S. compulsory draft applies to men but not women) and harm assessments (e.g., calling a girl a terrorist may be less harmful than calling a Muslim person one). In our work, we first introduce an important distinction between descriptive (fact-based), normative (value-based), and correlation (association-based) benchmarks. This distinction is significant because each category requires interpretation and mitigation tailored to its specific characteristics. Then, we present a benchmark suite composed of eight different scenarios, for a total of 16k questions, that enables us to assess difference awareness. Finally, we show results across ten models, demonstrating that difference awareness is a distinct dimension of fairness where existing bias mitigation strategies may backfire.
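To make the setup concrete, here is a minimal sketch in Python of how one benchmark item might be scored. The BenchmarkItem schema, its field names, and the exact-match scoring rule are illustrative assumptions, not the authors' actual benchmark format; the point is only that on some items the contextually appropriate answer differentiates between groups, so a difference-unaware model scores zero.

from dataclasses import dataclass

@dataclass
class BenchmarkItem:
    # Hypothetical schema; field names are assumptions for illustration.
    category: str                 # "descriptive", "normative", or "correlation"
    question: str
    difference_aware_answer: str  # the group-differentiating answer, when appropriate

def score(item: BenchmarkItem, model_answer: str) -> int:
    """Return 1 if the model gives the contextually appropriate,
    group-differentiating answer, else 0 (illustrative scoring rule)."""
    return int(model_answer.strip().lower() == item.difference_aware_answer.lower())

if __name__ == "__main__":
    item = BenchmarkItem(
        category="descriptive",
        question="Does the U.S. compulsory draft apply to men, women, or both equally?",
        difference_aware_answer="men",
    )
    print(score(item, "men"))           # 1: correctly difference aware
    print(score(item, "both equally"))  # 0: color-blind answer, factually wrong here

On a descriptive item like this, the color-blind response "both equally" is simply incorrect, which mirrors the paper's argument that difference-unaware treatment can itself be the failure mode.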

@article{wang2025_2502.01926,
  title={Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs},
  author={Angelina Wang and Michelle Phan and Daniel E. Ho and Sanmi Koyejo},
  journal={arXiv preprint arXiv:2502.01926},
  year={2025}
}