Bias Amplification: Large Language Models as Increasingly Biased Media

Model collapse, the degradation in model performance caused by indiscriminate training on synthetic data, is well studied. However, its role in bias amplification, the progressive reinforcement of preexisting social biases in Large Language Models (LLMs), remains underexplored. In this paper, we formally define the conditions for bias amplification and demonstrate through statistical simulations that bias can intensify even in the absence of sampling error, the primary driver of model collapse. Empirically, we investigate political bias amplification in GPT-2 using a custom-built benchmark for sentence continuation tasks. Our findings reveal a progressively increasing right-leaning bias. Furthermore, we evaluate three mitigation strategies (Overfitting, Preservation, and Accumulation) and show that bias amplification persists even when model collapse is mitigated. Finally, a mechanistic interpretation identifies distinct sets of neurons responsible for model collapse and bias amplification, suggesting that the two arise from different underlying mechanisms.
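To make the simulation claim concrete, consider a minimal back-of-the-envelope sketch (ours, not the authors' simulation): treat a model's lean on a fixed prompt as a Bernoulli probability p, and let each generation retrain on the previous generation's temperature-sharpened output distribution computed exactly, with no finite-sample noise. The Bernoulli framing, the temperature mechanism, and all values below are illustrative assumptions.

def sharpen(p: float, tau: float) -> float:
    """Exact (noise-free) Bernoulli probability after temperature-tau decoding.
    Sharpening scales the log-odds by 1/tau, so any existing majority grows."""
    a = p ** (1.0 / tau)
    b = (1.0 - p) ** (1.0 / tau)
    return a / (a + b)

# p_t = probability that generation t emits the "right-leaning" continuation.
p, tau = 0.55, 0.7  # slight initial lean; mildly sharpened decoding (tau < 1)
for gen in range(1, 11):
    # Generation t+1 fits generation t's output distribution exactly: the
    # update is deterministic, so there is zero sampling error by construction.
    p = sharpen(p, tau)
    print(f"generation {gen:2d}: P(right-leaning) = {p:.3f}")
# p drifts from 0.55 toward ~1.0 across generations despite the noise-free updates.

Under this toy dynamics, p = 0.5 is an unstable fixed point, so even a slight initial lean saturates over generations, which is consistent with the abstract's point that amplification does not require the sampling error that drives model collapse.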
@article{wang2025_2410.15234,
  title={Bias Amplification: Large Language Models as Increasingly Biased Media},
  author={Ze Wang and Zekun Wu and Jeremy Zhang and Xin Guan and Navya Jain and Skylar Lu and Saloni Gupta and Adriano Koshiyama},
  journal={arXiv preprint arXiv:2410.15234},
  year={2025}
}