
Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens
arXiv:2110.00521

1 October 2021
Saad Hassan
Matt Huenerfauth
Cecilia Ovesdotter Alm

Papers citing "Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens"

20 papers shown
Cross-Lingual Stability and Bias in Instruction-Tuned Language Models for Humanitarian NLP
Poli A. Nemkova
Amrit Adhikari
Matthew Pearson
Vamsi Krishna Sadu
Mark V. Albert
26 Oct 2025
ABLEIST: Intersectional Disability Bias in LLM-Generated Hiring Scenarios
Mahika Phutane
Hayoung Jung
Matthew Kim
Tanushree Mitra
Aditya Vashistha
13 Oct 2025
Who Gets Left Behind? Auditing Disability Inclusivity in Large Language Models
Deepika Dash
Yeshil Bangera
Mithil Bangera
Gouthami Vadithya
Srikant Panda
31 Aug 2025
Who's Asking? Investigating Bias Through the Lens of Disability Framed Queries in LLMs
Srikant Panda
Vishnu Hari
Kalpana Panda
Amit Agarwal
Hitesh Laxmichand Patel
18 Aug 2025
Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
Yuchu Jiang
Jian Zhao
Yuchen Yuan
Tianle Zhang
Yao Huang
...
Ya Zhang
Shuicheng Yan
Chi Zhang
Z. He
Xuelong Li
12 Aug 2025
Theories of "Sexuality" in Natural Language Processing Bias Research
Jacob Hobbs
22 Jun 2025
Fairness Definitions in Language Models Explained
Thang Viet Doan
Zhibo Chu
Sribala Vidyadhari Chinta
Wenbin Zhang
26 Jul 2024
Fairness Certification for Natural Language Processing and Large Language Models
Vincent Freiberger
Erik Buchmann
02 Jan 2024
Global Voices, Local Biases: Socio-Cultural Prejudices across Languages
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
A. Mukherjee
Chahat Raj
Ziwei Zhu
Antonios Anastasopoulos
26 Oct 2023
Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework
IET Blockchain, 2023
Imdad Ullah
Najm Hassan
S. Gill
Basem Suleiman
T. Ahanger
Zawar Shah
Junaid Qadir
S. Kanhere
19 Oct 2023
An Autoethnographic Case Study of Generative Artificial Intelligence's Utility for Accessibility
International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), 2023
Kate Glazko
Momona Yamagami
Aashaka Desai
Kelly Avery Mack
Venkatesh Potluri
Xuhai Xu
Jennifer Mankoff
19 Aug 2023
NBIAS: A Natural Language Processing Framework for Bias Identification in Text
Expert Systems with Applications (ESWA), 2023
Shaina Raza
Muskan Garg
Deepak John Reji
Syed Raza Bashir
Chen Ding
03 Aug 2023
Queer People are People First: Deconstructing Sexual Identity Stereotypes in Large Language Models
Harnoor Dhingra
Preetiha Jayashanker
Sayali S. Moghe
Emma Strubell
30 Jun 2023
Sociodemographic Bias in Language Models: A Survey and Forward Path
Vipul Gupta
Pranav Narayanan Venkit
Shomir Wilson
R. Passonneau
13 Jun 2023
This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Seraphina Goldfarb-Tarrant
Eddie L. Ungless
Esma Balkir
Su Lin Blodgett
22 May 2023
Language Model Behavior: A Comprehensive Survey
International Conference on Computational Logic (ICCL), 2023
Tyler A. Chang
Benjamin Bergen
20 Mar 2023
Fairness in Language Models Beyond English: Gaps and Challenges
Findings, 2023
Krithika Ramesh
Sunayana Sitaram
Monojit Choudhury
24 Feb 2023
Data Representativeness in Accessibility Datasets: A Meta-Analysis
International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), 2022
Rie Kamikubo
Lining Wang
Crystal Marte
Amnah Mahmood
Hernisa Kacorri
16 Jul 2022
Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users
Akhter Al Amin
Saad Hassan
Cecilia Ovesdotter Alm
Matt Huenerfauth
24 Jun 2022
A Disability Lens towards Biases in GPT-3 Generated Open-Ended Languages
Akhter Al Amin
Kazi Sinthia Kabir
23 Jun 2022