Silencing Empowerment, Allowing Bigotry: Auditing the Moderation of Hate Speech on Twitch

To meet the demands of content moderation, online platforms have resorted to automated systems. Newer forms of real-time engagement (e.g., users commenting on live streams) on platforms like Twitch exert additional pressure on the latency expected of such moderation systems. Despite their prevalence, relatively little is known about the effectiveness of these systems. In this paper, we conduct an audit of Twitch's automated moderation tool (AutoMod) to investigate its effectiveness in flagging hateful content. For our audit, we create streaming accounts to act as siloed test beds and interface with the live chat using Twitch's APIs to send comments collated from existing datasets. We measure AutoMod's accuracy in flagging blatantly hateful content containing misogyny, racism, ableism, and homophobia. Our experiments reveal that a large fraction of hateful messages, varying across datasets, evades moderation. Adding slurs to these messages, however, results in their removal, revealing AutoMod's reliance on slurs as a moderation signal. We also find that, contrary to Twitch's community guidelines, AutoMod blocks a substantial share of benign examples that use sensitive words in pedagogical or empowering contexts. Overall, our audit points to large gaps in AutoMod's capabilities and underscores the importance of such systems understanding context effectively.
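The abstract describes interfacing with the live chat through Twitch's APIs to replay dataset comments into researcher-controlled channels. A minimal sketch of one way this can be done is via Twitch's public IRC chat interface; the account name, channel, and token below are placeholders, and the paper's actual harness may differ.

```python
import socket

# Placeholder credentials for a hypothetical researcher-controlled test account.
SERVER = "irc.chat.twitch.tv"    # Twitch's public chat (IRC) endpoint
PORT = 6667
OAUTH_TOKEN = "oauth:<token>"    # placeholder, not a real token
BOT_NICK = "audit_bot"           # hypothetical bot account name
CHANNEL = "#audit_test_channel"  # hypothetical siloed test channel

def send_chat_message(message: str) -> None:
    """Send a single chat message to the test channel over Twitch's IRC interface."""
    with socket.create_connection((SERVER, PORT)) as sock:
        sock.sendall(f"PASS {OAUTH_TOKEN}\r\n".encode())
        sock.sendall(f"NICK {BOT_NICK}\r\n".encode())
        sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
        sock.sendall(f"PRIVMSG {CHANNEL} :{message}\r\n".encode())

# Example: replaying one comment collated from a dataset into the live chat,
# after which the moderation outcome (flagged or allowed) can be observed.
send_chat_message("example comment collated from a dataset")
```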
@article{shukla2025_2506.07667,
  title   = {Silencing Empowerment, Allowing Bigotry: Auditing the Moderation of Hate Speech on Twitch},
  author  = {Prarabdh Shukla and Wei Yin Chong and Yash Patel and Brennan Schaffner and Danish Pruthi and Arjun Bhagoji},
  journal = {arXiv preprint arXiv:2506.07667},
  year    = {2025}
}