
SafeSpeech: A Comprehensive and Interactive Tool for Analysing Sexist and Abusive Language in Conversations

Abstract

Detecting toxic language, including sexism, harassment, and abusive behaviour, remains a critical challenge, particularly in its subtle and context-dependent forms. Existing approaches largely focus on isolated message-level classification, overlooking toxicity that emerges across conversational contexts. To promote and enable future research in this direction, we introduce SafeSpeech, a comprehensive platform for toxic content detection and analysis that bridges message-level and conversation-level insights. The platform integrates fine-tuned classifiers and large language models (LLMs) to enable multi-granularity detection, toxicity-aware conversation summarization, and persona profiling. SafeSpeech also incorporates explainability mechanisms, such as perplexity gain analysis, to highlight the linguistic elements driving predictions. Evaluations on benchmark datasets, including EDOS, OffensEval, and HatEval, show that the platform reproduces state-of-the-art performance across multiple tasks, including fine-grained sexism detection.
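The abstract does not spell out how perplexity gain is computed; as a toy illustration of the general idea, each token can be scored by how much its removal lowers the sequence perplexity under some language model, so that unusually surprising tokens surface as candidate drivers of a prediction. The sketch below uses a simple add-one-smoothed unigram model purely for self-containment; the corpus, function names, and scoring details are illustrative assumptions, not the paper's method.

```python
import math
from collections import Counter

def unigram_model(corpus_tokens):
    """Return a unigram probability function with add-one smoothing.

    Illustrative stand-in for a real language model; any model that
    assigns token probabilities could be substituted here.
    """
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 bucket for unseen tokens
    def prob(tok):
        return (counts[tok] + 1) / (total + vocab)
    return prob

def perplexity(tokens, prob):
    """Perplexity of a token sequence under the given model."""
    if not tokens:
        return float("inf")
    log_sum = sum(math.log(prob(t)) for t in tokens)
    return math.exp(-log_sum / len(tokens))

def perplexity_gain(tokens, prob):
    """Score each token by how much removing it lowers perplexity.

    A large positive gain means the token made the sequence more
    surprising to the model, flagging it for explanation.
    """
    base = perplexity(tokens, prob)
    gains = []
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        gains.append((tok, base - perplexity(reduced, prob)))
    return gains

# Hypothetical mini-corpus of benign messages.
corpus = "you are kind you are helpful you are welcome".split()
prob = unigram_model(corpus)
scores = perplexity_gain("you are awful".split(), prob)
# The out-of-corpus token should receive the largest gain.
top_token = max(scores, key=lambda kv: kv[1])[0]
```

In this toy setup, "awful" never appears in the corpus, so removing it lowers the sequence perplexity the most and it receives the highest gain, which is the kind of token-level highlight the explainability mechanism is described as producing.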

@article{tan2025_2503.06534,
  title={SafeSpeech: A Comprehensive and Interactive Tool for Analysing Sexist and Abusive Language in Conversations},
  author={Xingwei Tan and Chen Lyu and Hafiz Muhammad Umer and Sahrish Khan and Mahathi Parvatham and Lois Arthurs and Simon Cullen and Shelley Wilson and Arshad Jhumka and Gabriele Pergola},
  journal={arXiv preprint arXiv:2503.06534},
  year={2025}
}