
When LLMs Disagree: Diagnosing Relevance Filtering Bias and Retrieval Divergence in SDG Search

William A. Ingram
Bipasha Banerjee
Edward A. Fox
Main: 7 pages · Bibliography: 1 page · 5 tables
Abstract

Large language models (LLMs) are increasingly used to assign document relevance labels in information retrieval pipelines, especially in domains lacking human-labeled data. However, different models often disagree on borderline cases, raising concerns about how such disagreement affects downstream retrieval. This study examines labeling disagreement between two open-weight LLMs, LLaMA and Qwen, on a corpus of scholarly abstracts related to Sustainable Development Goals (SDGs) 1, 3, and 7. We isolate disagreement subsets and examine their lexical properties, rank-order behavior, and classification predictability. Our results show that model disagreement is systematic, not random: disagreement cases exhibit consistent lexical patterns, produce divergent top-ranked outputs under shared scoring functions, and are distinguishable with AUCs above 0.74 using simple classifiers. These findings suggest that LLM-based filtering introduces structured variability in document retrieval, even under controlled prompting and shared ranking logic. We propose using classification disagreement as an object of analysis in retrieval evaluation, particularly in policy-relevant or thematic search tasks.
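A minimal sketch of the disagreement-predictability analysis described in the abstract, assuming scikit-learn and a hypothetical labeled file (sdg_abstracts_labeled.csv with abstract, llama_label, and qwen_label columns). This is not the authors' code; it only illustrates how per-document disagreement between two LLM labelers could be isolated and scored with a simple TF-IDF classifier and ROC AUC.

# Hypothetical sketch: test whether LLM label disagreement is predictable
# from lexical features alone. File name and column names are illustrative.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Assume one abstract per row with a binary relevance label from each model.
df = pd.read_csv("sdg_abstracts_labeled.csv")
df["disagree"] = (df["llama_label"] != df["qwen_label"]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df["abstract"], df["disagree"],
    test_size=0.2, random_state=42, stratify=df["disagree"],
)

# Simple lexical features: unigram/bigram TF-IDF fed to logistic regression.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

# An AUC well above 0.5 would indicate disagreement is systematic, not random.
scores = clf.predict_proba(vectorizer.transform(X_test))[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))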

@article{ingram2025_2507.02139,
  title={When LLMs Disagree: Diagnosing Relevance Filtering Bias and Retrieval Divergence in SDG Search},
  author={William A. Ingram and Bipasha Banerjee and Edward A. Fox},
  journal={arXiv preprint arXiv:2507.02139},
  year={2025}
}