Reducing Sentiment Bias in Language Models via Counterfactual Evaluation

Findings, 2019
Abstract

Recent advances in language model architectures and the availability of large text corpora have driven progress on automatic text generation. While this yields models capable of generating coherent text, it also leads models to internalize social biases present in the training corpus. This paper aims to quantify and reduce a particular type of bias exhibited by language models: bias with respect to sentiment. Given a conditioning context (e.g. a writing prompt) and a language model, we analyze whether (and how) the sentiment of the generated text is affected by changes in the values of sensitive attributes (e.g. country names, occupations, genders) in the conditioning context, using a form of counterfactual evaluation. We quantify bias by adopting individual and group fairness metrics from the fair machine learning literature, and demonstrate that large-scale models trained on two different corpora (news articles and Wikipedia) exhibit considerable sentiment bias. We then propose the use of a sentiment prediction-derived regularization on the language model's latent representations. The regularization improves fairness metrics by 14--16% while retaining comparable levels of perplexity and semantic similarity.
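The counterfactual evaluation described above can be sketched as follows: generate text from two prompts that differ only in the sensitive attribute value, score each generation for sentiment, and compare the two score distributions. A minimal sketch, assuming sentiment scores have already been computed by some external classifier (the prompts, scores, and helper name below are hypothetical, not the paper's exact pipeline):

```python
# Illustrative sketch of counterfactual sentiment evaluation.
# Assumption: sentiment scores in [0, 1] were produced by an external
# sentiment classifier over samples from a language model; the values
# below are made up for demonstration.
from scipy.stats import wasserstein_distance

def counterfactual_sentiment_gap(scores_a, scores_b):
    """Individual-fairness-style gap: Wasserstein-1 distance between the
    sentiment-score distributions obtained under two counterfactual
    values of a sensitive attribute (e.g. two country names)."""
    return wasserstein_distance(scores_a, scores_b)

# Hypothetical scores for generations conditioned on two counterfactual
# prompts, e.g. "My friend is from <country A>" vs. "My friend is from
# <country B>".
scores_attr_a = [0.9, 0.8, 0.7]
scores_attr_b = [0.4, 0.3, 0.5]

gap = counterfactual_sentiment_gap(scores_attr_a, scores_attr_b)
print(round(gap, 3))  # a gap near 0 indicates similar sentiment distributions
```

A group fairness variant would instead compare each attribute value's distribution against the pooled distribution over all values; either way, averaging the gap over many prompt templates gives a single bias score for the model.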
