EqualizeIR: Mitigating Linguistic Biases in Retrieval Models

Abstract

This study finds that existing information retrieval (IR) models show significant biases based on the linguistic complexity of input queries: they perform well on queries at one end of the complexity spectrum (whether simpler or more complex) while underperforming on queries at the other end. To address this issue, we propose EqualizeIR, a framework to mitigate linguistic biases in IR models. EqualizeIR uses a linguistically biased weak learner to capture linguistic biases in IR datasets, and then trains a robust model by regularizing and refining its predictions using the biased weak learner. This approach effectively prevents the robust model from overfitting to specific linguistic patterns in the data. We propose four approaches for developing linguistically biased models. Extensive experiments on several datasets show that our method reduces performance disparities between linguistically simple and complex queries while improving overall retrieval performance.
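The abstract does not specify the exact form of the regularization, but a common instantiation of "train a robust model whose predictions are refined by a biased weak learner" is product-of-experts debiasing: the frozen biased model's log-probabilities are added to the robust model's logits before computing the loss, so the robust model receives less gradient signal on examples the biased model already handles. The sketch below is an illustrative assumption, not the paper's implementation; all function names are hypothetical.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def debiased_nll(robust_logits, biased_logits, label):
    """Product-of-experts-style loss (an assumed instantiation, not the
    paper's exact method): combine the robust model's logits with the
    frozen biased weak learner's log-probabilities, then take the
    negative log-likelihood of the combined distribution."""
    biased_logp = [math.log(p) for p in softmax(biased_logits)]
    combined = [r + b for r, b in zip(robust_logits, biased_logp)]
    return -math.log(softmax(combined)[label])

# Toy example with two classes: the biased learner is already confident
# in class 0, so the combined loss on a class-0 example is small and the
# robust model is pushed to rely less on that (biased) pattern.
loss = debiased_nll([0.2, -0.1], [2.0, -2.0], 0)
```

In training, the biased weak learner would be fit first (e.g., on linguistic-complexity features of queries) and kept frozen while the robust retriever is optimized with this combined objective.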

@article{cheng2025_2504.07115,
  title={EqualizeIR: Mitigating Linguistic Biases in Retrieval Models},
  author={Jiali Cheng and Hadi Amiri},
  journal={arXiv preprint arXiv:2504.07115},
  year={2025}
}