Evaluating Gender Bias of LLMs in Making Morality Judgements

13 October 2024 · arXiv:2410.09992
Divij Bajaj, Yuanyuan Lei, Jonathan Tong, Ruihong Huang

Papers citing "Evaluating Gender Bias of LLMs in Making Morality Judgements"

1 / 1 papers shown

SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety
Paul Röttger, Fabio Pernisi, Bertie Vidgen, Dirk Hovy
ELM, KELM · 08 Apr 2024