LLMs and Finetuning: Benchmarking cross-domain performance for hate speech detection

Abstract

In the evolving landscape of online communication, hate speech detection remains a formidable challenge, further compounded by the diversity of digital platforms. This study investigates the effectiveness and adaptability of pre-trained and fine-tuned Large Language Models (LLMs) in identifying hate speech, addressing three central questions: (1) To what extent does model performance depend on the fine-tuning and training parameters? (2) To what extent do models generalize to cross-domain hate speech detection? (3) What specific features of the datasets or models influence their generalization potential? Our experiments show that LLMs offer a substantial advantage over the state of the art even without pretraining. Ordinary least squares (OLS) analyses suggest that the advantage of training with fine-grained hate speech labels is washed away as dataset size increases. While our research demonstrates the potential of LLMs for hate speech detection, several limitations remain, particularly regarding the validity and reproducibility of the results. We conclude with an exhaustive discussion of the challenges we faced in our experimentation and offer recommended best practices for future scholars designing benchmarking experiments of this kind.

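As an illustrative aside, the sketch below shows the kind of OLS analysis the abstract describes: regressing cross-domain F1 on training-set size and label granularity, with an interaction term testing whether the fine-grained-label advantage shrinks as datasets grow. This is not the authors' code; all column names and values are hypothetical placeholders.

# Minimal sketch (assumed setup, not the paper's actual analysis):
# one row per (model, training set, evaluation set) run.
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.DataFrame({
    "f1":           [0.71, 0.64, 0.78, 0.69, 0.82, 0.75],  # hypothetical scores
    "log_n_train":  [8.5, 8.5, 10.2, 10.2, 11.9, 11.9],    # log training-set size
    "fine_grained": [1, 0, 1, 0, 1, 0],                     # 1 = fine-grained labels
})

# The interaction term tests whether the fine-grained-label advantage
# diminishes as (log) training-set size grows, as the abstract suggests.
model = smf.ols("f1 ~ log_n_train * fine_grained", data=runs).fit()
print(model.summary())
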
@article{nasir2025_2310.18964,
  title={LLMs and Finetuning: Benchmarking cross-domain performance for hate speech detection},
  author={Ahmad Nasir and Aadish Sharma and Kokil Jaidka and Saifuddin Ahmed},
  journal={arXiv preprint arXiv:2310.18964},
  year={2025}
}