ResearchTrend.AI
AmpleHate: Amplifying the Attention for Versatile Implicit Hate Detection

26 May 2025
Yejin Lee
Joonghyuk Hahn
Hyeseon Ahn
Yo-Sub Han
Abstract

Implicit hate speech detection is challenging due to its subtlety and its reliance on contextual interpretation rather than explicit offensive words. Current approaches rely on contrastive learning, which has been shown to be effective at distinguishing hate from non-hate sentences. Humans, however, detect implicit hate speech by first identifying specific targets within the text and then interpreting how these targets relate to their surrounding context. Motivated by this reasoning process, we propose AmpleHate, a novel approach designed to mirror human inference for implicit hate detection. AmpleHate identifies explicit targets using a pre-trained Named Entity Recognition model and captures implicit target information via the [CLS] token. It computes attention-based relationships between the explicit and implicit targets and the sentence context, and then directly injects these relational vectors into the final sentence representation. This amplifies the critical signals of target-context relations for determining implicit hate. Experiments demonstrate that AmpleHate achieves state-of-the-art performance, outperforming contrastive learning baselines by an average of 82.14% and achieving faster convergence. Qualitative analyses further reveal that the attention patterns produced by AmpleHate closely align with human judgment, underscoring its interpretability and robustness.
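The abstract describes computing attention between target vectors (explicit targets from NER, plus the [CLS] token as an implicit target) and the sentence context, then injecting the resulting relational vector into the sentence representation. The sketch below illustrates that idea with plain numpy; the function name `amplify`, the mean-pooling over targets, and the scaling factor `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def amplify(cls_vec, target_vecs, token_vecs, alpha=1.0):
    """Sketch of the target-context amplification step.

    cls_vec:     (d,)   [CLS] embedding, used as the implicit target
    target_vecs: (t, d) embeddings of explicit targets found by NER
    token_vecs:  (n, d) contextual embeddings of the sentence tokens
    alpha:       assumed injection weight (not specified in the abstract)
    """
    # Queries: explicit targets plus the [CLS] vector as the implicit target.
    queries = np.vstack([target_vecs, cls_vec[None, :]])
    d = token_vecs.shape[-1]
    # Scaled dot-product attention of each target over the sentence context.
    scores = queries @ token_vecs.T / np.sqrt(d)      # (t + 1, n)
    attn = softmax(scores, axis=-1)
    # Pool the per-target relational vectors (mean pooling is an assumption).
    relation = (attn @ token_vecs).mean(axis=0)       # (d,)
    # Directly inject the target-context relation into the representation.
    return cls_vec + alpha * relation
```

The amplified vector would then feed the classification head in place of the raw [CLS] representation; with `alpha=0` the step reduces to the standard [CLS]-based classifier.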

@article{lee2025_2505.19528,
  title={AmpleHate: Amplifying the Attention for Versatile Implicit Hate Detection},
  author={Yejin Lee and Joonghyuk Hahn and Hyeseon Ahn and Yo-Sub Han},
  journal={arXiv preprint arXiv:2505.19528},
  year={2025}
}