
Enhancing LLM-based Hatred and Toxicity Detection with Meta-Toxic Knowledge Graph

Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Main: 9 pages · Appendix: 3 pages · Bibliography: 2 pages · 6 figures · 4 tables
Abstract

The rapid growth of social media platforms has raised significant concerns about online content toxicity. When Large Language Models (LLMs) are used for toxicity detection, two key challenges emerge: 1) the absence of domain-specific toxic knowledge leads to false negatives; 2) the excessive sensitivity of LLMs to toxic speech results in false positives, limiting freedom of speech. To address these issues, we propose a novel method called MetaTox, which leverages graph search over a meta-toxic knowledge graph to enhance hatred and toxicity detection. First, we construct a comprehensive meta-toxic knowledge graph by using LLMs to extract toxic information through a three-step pipeline, with toxic benchmark datasets serving as corpora. Second, we query the graph via retrieval and ranking processes to supplement accurate, relevant toxic knowledge. Extensive experiments and in-depth case studies across multiple datasets demonstrate that MetaTox significantly decreases the false positive rate while boosting overall toxicity detection performance. Our code is available at this https URL.
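The retrieve-and-rank query described above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' code: the triple schema, the overlap-based scoring, and the prompt template are all assumptions made for illustration.

```python
# Hypothetical sketch of a MetaTox-style query: retrieve triples from a
# meta-toxic knowledge graph, rank them, and prepend the top results to
# an LLM detection prompt. All names and scoring choices are assumptions.

# Toy knowledge graph: (head, relation, tail) triples extracted from corpora.
TRIPLES = [
    ("slur_X", "targets", "group_A"),
    ("phrase_Y", "implies", "dehumanization"),
    ("emoji_Z", "used_as", "dog_whistle"),
]

def retrieve(query_terms, triples):
    """Return triples sharing at least one element with the query terms."""
    terms = set(query_terms)
    return [t for t in triples if terms & set(t)]

def rank(query_terms, candidates, top_k=2):
    """Order candidate triples by term overlap; keep the top_k."""
    terms = set(query_terms)
    scored = sorted(candidates, key=lambda t: len(terms & set(t)), reverse=True)
    return scored[:top_k]

def build_prompt(post, triples):
    """Prepend the retrieved toxic knowledge to the detection prompt."""
    facts = "; ".join(f"{h} {r} {t}" for h, r, t in triples)
    return f"Known toxic knowledge: {facts}\nIs the following post toxic? {post}"
```

In the paper's pipeline the retrieval and ranking stages would operate over a much larger LLM-extracted graph; the sketch only shows how supplementing the prompt with graph knowledge could reduce both false negatives (missing domain knowledge) and false positives (context for benign uses).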
