
Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation

17 July 2025
Hadi Mohammadi, Tina Shahedi, Pablo Mosteiro, Massimo Poesio, Ayoub Bagheri, Anastasia Giachanou
arXiv: 2507.13138 (abs · PDF · HTML)

Papers citing "Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation"

1 / 1 papers shown

Do Large Language Models Understand Morality Across Cultures?
Hadi Mohammadi, Yasmeen F.S.S. Meijer, Efthymia Papadopoulou, Ayoub Bagheri
28 Jul 2025