
Model Bias in NLP - Application to Hate Speech Classification

Abstract

This document summarizes our results for the NLP lecture at ETH in the spring semester 2021. In this work, a BERT-based neural network model (Devlin et al., 2018) is applied to the JIGSAW data set (Jigsaw/Conversation AI, 2019) in order to create a model that identifies hateful and toxic comments (strictly separated from offensive language) on English-language online social platforms, in this case Twitter. Three other neural network architectures and a GPT-2 model (Radford et al., 2019) are also applied to the provided data set in order to compare these different models. The trained BERT model is then applied to two different data sets to evaluate its generalisation power, namely another Twitter data set (Davidson et al., 2017) and the HASOC 2019 data set (Mandl et al., 2019), which includes Twitter as well as Facebook comments; we focus on the English HASOC 2019 data. In addition, we show that fine-tuning the trained BERT model on these two data sets under different transfer-learning scenarios, retraining either some or all layers, improves the predictive scores compared to simply applying the model pre-trained on the JIGSAW data set. With our results, we obtain precisions from 64% to around 90% while still achieving acceptable recall values of at least the lower 60s (in %), demonstrating that BERT is suitable for real use cases on social platforms.
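As a rough illustration of the transfer-learning scenarios mentioned above (retraining only some layers of a pre-trained model versus all of them), the following PyTorch sketch shows the underlying mechanism: toggling `requires_grad` on parameter groups so that the optimizer only updates the top layers. This is not the authors' code; a stack of linear layers stands in for an actual BERT encoder, and `freeze_lower_layers` is a hypothetical helper.

```python
# Hypothetical sketch of partial-layer fine-tuning (not the paper's code).
# A nn.Sequential of linear layers stands in for a pre-trained BERT encoder.
import torch.nn as nn


def freeze_lower_layers(model: nn.Sequential, n_trainable: int) -> None:
    """Freeze all layers except the last `n_trainable` ones."""
    layers = list(model)
    cutoff = len(layers) - n_trainable
    for layer in layers[:cutoff]:
        for p in layer.parameters():
            p.requires_grad = False


# Stand-in "encoder" with 4 layers; retrain only the top one.
encoder = nn.Sequential(*[nn.Linear(8, 8) for _ in range(4)])
freeze_lower_layers(encoder, n_trainable=1)

trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(trainable, total)
```

Retraining all layers corresponds to `n_trainable=4` here; in practice one passes only the parameters with `requires_grad=True` to the optimizer, e.g. `torch.optim.AdamW(p for p in encoder.parameters() if p.requires_grad)`.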
