In-game Toxic Language Detection: Shared Task and Attention Residuals

11 November 2022
Yuanzhe Jia
Weixuan Wu
Feiqi Cao
Soyeon Caren Han
Abstract

In-game toxic language has become a pressing issue for the gaming industry and community. Several frameworks and models for analysing online game toxicity have been proposed, yet detection remains challenging because in-game chat messages are extremely short. In this paper, we describe how an in-game toxic language shared task was established using real-world in-game chat data. In addition, we propose a model/framework for toxic language token tagging (slot filling) on in-game chat. The relevant code is publicly available on GitHub: this https URL
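Below is a minimal sketch of what a token-tagging (slot-filling) model with an attention residual could look like, assuming a BERT-style encoder in PyTorch. The class name ToxicTokenTagger, the binary label set, and the exact residual wiring are illustrative assumptions, not the authors' published architecture; see the linked GitHub repository for the actual implementation.

# Minimal sketch of a token-tagging (slot-filling) model with an attention
# residual, assuming a BERT-style encoder. The class name, label set, and
# residual wiring are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ToxicTokenTagger(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Extra self-attention layer whose output is added back onto the
        # encoder states (the "attention residual" in this sketch).
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.norm = nn.LayerNorm(hidden)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Padding positions are masked out of the extra attention layer.
        attn_out, _ = self.attn(states, states, states,
                                key_padding_mask=~attention_mask.bool())
        states = self.norm(states + attn_out)   # residual connection
        return self.classifier(states)          # (batch, seq_len, num_labels)

# Usage on a short in-game chat message (one toxic/non-toxic label per token):
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["gg ez noob"], return_tensors="pt")
model = ToxicTokenTagger()
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, seq_len, 2])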
@article{jia2025_2211.05995,
  title={In-game Toxic Language Detection: Shared Task and Attention Residuals},
  author={Yuanzhe Jia and Weixuan Wu and Feiqi Cao and Soyeon Caren Han},
  journal={arXiv preprint arXiv:2211.05995},
  year={2025}
}