Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models

Hate speech detection is a socially sensitive and inherently subjective task, with judgments often varying according to annotators' personal traits. While prior work has examined how socio-demographic factors influence annotation, the impact of personality traits on the labeling behavior of Large Language Models (LLMs) remains largely unexplored. In this paper, we present the first comprehensive study on the role of persona prompts in hate speech classification, focusing on MBTI-based traits. A human annotation survey confirms that MBTI dimensions significantly affect labeling behavior. Extending this finding to LLMs, we prompt four open-source models with MBTI personas and evaluate their outputs on three hate speech datasets. Our analysis uncovers substantial persona-driven variation, including inconsistencies with ground truth, inter-persona disagreement, and logit-level biases. These findings highlight the need to carefully define persona prompts in LLM-based annotation workflows, with implications for fairness and alignment with human values.
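
To make the methodology concrete, the sketch below shows one plausible way to prompt an open-source chat model with an MBTI persona for hate speech classification and to read off the logits of the "Yes"/"No" answer tokens. This is a minimal illustration, not the paper's exact setup: the model name, prompt wording, and label tokens are all assumptions.

```python
# Minimal sketch (assumed setup, not the authors' exact pipeline) of MBTI
# persona prompting for hate speech classification with an open-source chat
# model, including a logit-level comparison of the "Yes"/"No" answer tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative choice; any chat model works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def classify_with_persona(text: str, mbti: str) -> dict:
    """Ask the model, role-played as an MBTI persona, whether `text` is hate
    speech, and return the logits of the 'Yes' and 'No' answer tokens."""
    messages = [
        {"role": "system",
         "content": f"You are a person with the MBTI personality type {mbti}."},
        {"role": "user",
         "content": "Is the following text hate speech? "
                    f"Answer only 'Yes' or 'No'.\n\nText: {text}"},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        # Logits at the last position predict the first answer token.
        logits = model(input_ids).logits[0, -1]
    yes_id = tokenizer.encode("Yes", add_special_tokens=False)[0]
    no_id = tokenizer.encode("No", add_special_tokens=False)[0]
    return {
        "persona": mbti,
        "yes_logit": logits[yes_id].item(),
        "no_logit": logits[no_id].item(),
    }

# Run the same input under contrasting personas; divergent logits indicate
# persona-driven bias in the model's labeling behavior.
for persona in ["INTJ", "ESFP"]:
    print(classify_with_persona("Example post to classify.", persona))
```

Comparing the answer-token logits across personas on identical inputs mirrors, at a small scale, the kind of persona-driven divergence the study reports.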
@article{yuan2025_2506.08593,
  title={Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models},
  author={Shuzhou Yuan and Ercong Nie and Mario Tawfelis and Helmut Schmid and Hinrich Schütze and Michael Färber},
  journal={arXiv preprint arXiv:2506.08593},
  year={2025}
}