Hate Personified: Investigating the role of LLMs in content moderation

3 October 2024
Sarah Masud
Sahajpreet Singh
Viktor Hangya
Alexander M. Fraser
Tanmoy Chakraborty
arXiv: 2410.02657

Papers citing "Hate Personified: Investigating the role of LLMs in content moderation"

1 paper shown
Can Prompting LLMs Unlock Hate Speech Detection across Languages? A Zero-shot and Few-shot Study
Faeze Ghorbanpour
Daryna Dementieva
Alexander M. Fraser
09 May 2025