Encoding Inequity: Examining Demographic Bias in LLM-Driven Robot Caregiving

24 February 2025
Raj Korpan
Abstract

As robots take on caregiving roles, ensuring equitable and unbiased interactions with diverse populations is critical. Although Large Language Models (LLMs) serve as key components in shaping robotic behavior, speech, and decision-making, these models may encode and propagate societal biases, leading to disparities in care based on demographic factors. This paper examines how LLM-generated responses shape robot caregiving characteristics and responsibilities when prompted with different demographic information related to sex, gender, sexuality, race, ethnicity, nationality, disability, and age. Findings show simplified descriptions for disability and age, lower sentiment for disability and LGBTQ+ identities, and distinct clustering patterns reinforcing stereotypes in caregiving narratives. These results emphasize the need for ethical and inclusive HRI design.

@article{korpan2025_2503.05765,
  title={Encoding Inequity: Examining Demographic Bias in LLM-Driven Robot Caregiving},
  author={Raj Korpan},
  journal={arXiv preprint arXiv:2503.05765},
  year={2025}
}