Detecting Linguistic Bias in Government Documents Using Large Language Models

This paper addresses the critical need for detecting bias in government documents, an underexplored area with significant implications for governance. Existing methodologies often overlook the unique context and far-reaching impacts of governmental documents, potentially obscuring embedded biases that shape public policy and citizen-government interactions. To bridge this gap, we introduce the Dutch Government Data for Bias Detection (DGDB), a dataset sourced from the Dutch House of Representatives and annotated for bias by experts. We fine-tune several BERT-based models on this dataset and compare their performance with that of generative language models. Additionally, we conduct a comprehensive error analysis that includes explanations of the models' predictions. Our findings demonstrate that fine-tuned models achieve strong performance and significantly outperform generative language models, indicating the effectiveness of DGDB for bias detection. This work underscores the importance of labeled datasets for bias detection in various languages and contributes to more equitable governance practices.
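To make the fine-tuning setup concrete, below is a minimal sketch of the kind of pipeline the abstract describes: a BERT-based encoder with a binary classification head trained on expert-labeled sentences. The checkpoint choice, file name, column names, and hyperparameters are illustrative assumptions, not details published in the paper; the DGDB loading step is stubbed with a hypothetical CSV of (text, label) pairs.

```python
# Sketch only: a BERT-based bias classifier fine-tuned on labeled sentences.
# All names below (CSV file, checkpoint, hyperparameters) are assumptions.
import pandas as pd
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# Hypothetical annotation file: one sentence per row,
# with columns "text" and "label" (1 = biased, 0 = unbiased).
df = pd.read_csv("dgdb_annotations.csv")
dataset = Dataset.from_pandas(df).train_test_split(test_size=0.2, seed=42)

# A Dutch BERT checkpoint is a plausible choice for Dutch parliamentary
# text; BERTje ("GroNLP/bert-base-dutch-cased") is one public option.
checkpoint = "GroNLP/bert-base-dutch-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2
)

def tokenize(batch):
    # Truncate long passages; Trainer pads dynamically per batch.
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bias-detector",
    num_train_epochs=3,              # common fine-tuning defaults,
    per_device_train_batch_size=16,  # not the paper's reported values
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
)
trainer.train()
```

A comparison against generative language models, as the abstract describes, would instead prompt an instruction-tuned model to label each sentence and score its outputs against the same held-out annotations.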
@article{swart2025_2502.13548,
  title={Detecting Linguistic Bias in Government Documents Using Large Language Models},
  author={Milena de Swart and Floris den Hengst and Jieying Chen},
  journal={arXiv preprint arXiv:2502.13548},
  year={2025}
}