Unequal Opportunities: Examining the Bias in Geographical Recommendations by Large Language Models

Recent advancements in Large Language Models (LLMs) have made them a popular information-seeking tool among end users. However, the statistical training methods for LLMs have raised concerns about their representation of under-represented topics, potentially leading to biases that could influence real-world decisions and opportunities. These biases could have significant economic, social, and cultural impacts as LLMs become more prevalent, whether through direct interactions (such as when users engage with chatbots or automated assistants) or through their integration into third-party applications (as agents), where the models influence decision-making processes and functionalities behind the scenes. Our study examines the biases present in LLMs' recommendations of U.S. cities and towns across three domains: relocation, tourism, and starting a business. We explore two key research questions: (i) how similar LLMs' responses are, and (ii) how this similarity might favor areas with certain characteristics over others, introducing biases. We focus on the consistency of LLMs' responses and their tendency to over-represent or under-represent specific locations. Our findings point to consistent demographic biases in these recommendations, which could perpetuate a "rich-get-richer" effect that widens existing economic disparities.
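To make the two research questions concrete, here is a minimal Python sketch of how one might quantify them: mean pairwise Jaccard similarity across repeated responses for consistency (i), and each city's share of recommendations relative to a uniform baseline for over-representation (ii). The prompt, the sample responses, and the metric choices are illustrative assumptions, not the paper's actual experimental setup.

from collections import Counter
from itertools import combinations

def jaccard(a: list[str], b: list[str]) -> float:
    """Set overlap between two recommendation lists (RQ i: consistency)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def over_representation(responses: list[list[str]]) -> dict[str, float]:
    """Ratio of each city's recommendation share to a uniform baseline
    (RQ ii). Values > 1 mean the city appears more often than an equal
    split across all mentioned cities would predict."""
    counts = Counter(city for rec in responses for city in rec)
    total = sum(counts.values())
    uniform_share = 1 / len(counts)  # equal-share baseline
    return {city: (n / total) / uniform_share for city, n in counts.items()}

# Hypothetical sample: repeated answers to a "best U.S. city to
# relocate to" prompt, collected from the same LLM.
sample = [
    ["Austin", "Seattle", "Denver"],
    ["Austin", "Denver", "Raleigh"],
    ["Austin", "Seattle", "Boise"],
]

pairs = list(combinations(sample, 2))
mean_sim = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
print(f"mean pairwise Jaccard similarity: {mean_sim:.2f}")

for city, ratio in sorted(over_representation(sample).items(),
                          key=lambda kv: -kv[1]):
    print(f"{city}: {ratio:.2f}x the uniform baseline")

In a full study, the equal-share baseline would be replaced by demographic or economic reference data (e.g., population or income statistics) so that over-representation can be linked to the area characteristics the abstract describes.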
@article{dudy2025_2504.05325,
  title   = {Unequal Opportunities: Examining the Bias in Geographical Recommendations by Large Language Models},
  author  = {Shiran Dudy and Thulasi Tholeti and Resmi Ramachandranpillai and Muhammad Ali and Toby Jia-Jun Li and Ricardo Baeza-Yates},
  journal = {arXiv preprint arXiv:2504.05325},
  year    = {2025}
}