What Are They Filtering Out? A Survey of Filtering Strategies for Harm Reduction in Pretraining Datasets
Data filtering strategies are a crucial component in developing safe Large Language Models (LLMs), since they support the removal of harmful content from pretraining datasets. However, there is a lack of research on the actual impact of these strategies on groups vulnerable to discrimination, and their effectiveness has not yet been systematically assessed. In this paper, we present a benchmark study of data filtering strategies for harm reduction, aimed at providing a systematic overview of these approaches. We survey 55 technical reports of English LMs and LLMs to identify the filtering strategies documented in the literature and implement an experimental setting to test their impact on vulnerable groups. Our results show that while these strategies reduce harmful content in documents, they have the side effect of increasing the underrepresentation of groups vulnerable to discrimination in the resulting datasets.
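The kind of filtering surveyed here ranges from word-level blocklists to classifier-based toxicity scoring. As a rough illustration only (not the paper's actual experimental setting), a minimal blocklist-style filter together with a check of how filtering shifts the share of documents mentioning identity terms might look like the sketch below; all word lists and documents are hypothetical placeholders.

    # Illustrative sketch: blocklist-style document filtering plus a simple
    # representation check. Term lists and documents are placeholders, not
    # the resources used in the paper.
    import re

    BLOCKLIST = {"slur1", "slur2"}          # placeholder harmful-word list
    IDENTITY_TERMS = {"muslim", "lesbian"}  # placeholder vulnerable-group mentions

    def tokens(doc: str) -> set:
        return set(re.findall(r"[a-z']+", doc.lower()))

    def keep(doc: str) -> bool:
        # Drop any document containing at least one blocklisted word.
        return not (tokens(doc) & BLOCKLIST)

    def identity_share(docs: list) -> float:
        # Fraction of documents mentioning at least one identity term.
        hits = sum(1 for d in docs if tokens(d) & IDENTITY_TERMS)
        return hits / len(docs) if docs else 0.0

    corpus = ["example pretraining document", "another document"]  # placeholder corpus
    filtered = [d for d in corpus if keep(d)]
    print(f"identity share before: {identity_share(corpus):.3f}, "
          f"after: {identity_share(filtered):.3f}")

Because blocklisted words frequently co-occur with identity terms (for instance in reclaimed or descriptive uses), a filter of this kind can remove a disproportionate share of documents that mention vulnerable groups, which is the side effect the paper measures.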
@article{stranisci2025_2503.05721,
  title   = {What Are They Filtering Out? A Survey of Filtering Strategies for Harm Reduction in Pretraining Datasets},
  author  = {Marco Antonio Stranisci and Christian Hardmeier},
  journal = {arXiv preprint arXiv:2503.05721},
  year    = {2025}
}