
Can Prompting LLMs Unlock Hate Speech Detection across Languages? A Zero-shot and Few-shot Study

Abstract

Despite growing interest in automated hate speech detection, most existing approaches overlook the linguistic diversity of online content. Multilingual instruction-tuned large language models such as LLaMA, Aya, Qwen, and BloomZ offer promising capabilities across languages, but their effectiveness in identifying hate speech through zero-shot and few-shot prompting remains underexplored. This work evaluates prompting-based detection with LLMs across eight non-English languages, using several prompting techniques and comparing them against fine-tuned encoder models. We show that while zero-shot and few-shot prompting lag behind fine-tuned encoder models on most real-world evaluation sets, they generalize better on functional tests for hate speech detection. Our study also reveals that prompt design plays a critical role, with each language often requiring customized prompting techniques to maximize performance.
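For illustration, the sketch below shows what zero-shot and few-shot prompting for binary hate speech classification might look like. The prompt wording, the label set, and the llm callable are assumptions for this sketch, not the paper's actual prompts or models.

# Minimal sketch of zero-shot vs. few-shot prompting for binary hate
# speech classification. The prompt templates, the "hateful"/"non-hateful"
# label set, and the `llm` callable (a stand-in for whichever completion
# API or local model is used) are illustrative assumptions.
from typing import Callable, Sequence, Tuple

def zero_shot_prompt(text: str) -> str:
    """Instruction-only prompt: the model sees no labeled examples."""
    return (
        "Classify the following text as 'hateful' or 'non-hateful'. "
        "Answer with one word.\n\n"
        f"Text: {text}\nLabel:"
    )

def few_shot_prompt(text: str, examples: Sequence[Tuple[str, str]]) -> str:
    """Few-shot prompt: prepends labeled (text, label) demonstrations."""
    demos = "\n\n".join(f"Text: {t}\nLabel: {l}" for t, l in examples)
    return (
        "Classify each text as 'hateful' or 'non-hateful'. "
        "Answer with one word.\n\n"
        f"{demos}\n\nText: {text}\nLabel:"
    )

def classify(llm: Callable[[str], str], prompt: str) -> str:
    """Map the model's free-form completion onto the binary label set."""
    completion = llm(prompt).strip().lower()
    return "hateful" if completion.startswith("hateful") else "non-hateful"

In the few-shot case the demonstrations would typically be drawn from labeled data in the target language; the paper's finding that each language often needs its own prompt design suggests that both the instruction wording and the choice of demonstrations may have to be tuned per language.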

@article{ghorbanpour2025_2505.06149,
  title={Can Prompting LLMs Unlock Hate Speech Detection across Languages? A Zero-shot and Few-shot Study},
  author={Faeze Ghorbanpour and Daryna Dementieva and Alexander Fraser},
  journal={arXiv preprint arXiv:2505.06149},
  year={2025}
}