
Beyond Partisan Leaning: A Comparative Analysis of Political Bias in Large Language Models

Abstract

As large language models (LLMs) become increasingly embedded in civic, educational, and political information environments, concerns about their potential political bias have grown. Prior research often evaluates such bias through simulated personas or predefined ideological typologies, which may introduce artificial framing effects or overlook how models behave in general-use scenarios. This study adopts a persona-free, topic-specific approach to evaluating political behavior in LLMs, reflecting how users typically interact with these systems, without ideological role-play or conditioning. We introduce a two-dimensional framework: one axis captures partisan orientation on highly polarized topics (e.g., abortion, immigration), and the other assesses sociopolitical engagement on less polarized issues (e.g., climate change, foreign policy). Using survey-style prompts drawn from the ANES and Pew Research Center, we analyze responses from 43 LLMs developed in the U.S., Europe, China, and the Middle East. We propose an entropy-weighted bias score to quantify both the direction and consistency of partisan alignment, and we identify four behavioral clusters from models' engagement profiles. Findings show that most models lean center-left or left ideologically and vary in their nonpartisan engagement patterns. Model scale and openness are not strong predictors of behavior, suggesting that alignment strategy and institutional context play a more decisive role in shaping political expression.
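The abstract names the entropy-weighted bias score but does not reproduce its formula here. The Python sketch below illustrates one plausible reading, assuming the score weights the mean partisan direction of a model's repeated answers by their consistency, i.e., one minus the normalized Shannon entropy of the answer distribution. The function name, response coding, and normalization are illustrative assumptions, not the authors' definition.

import numpy as np
from scipy.stats import entropy

def entropy_weighted_bias(responses, n_options):
    """Hypothetical entropy-weighted bias score (not the paper's exact formula).

    `responses` are a model's repeated answers to one survey item, coded on a
    signed partisan scale (e.g., -1 = conservative, 0 = neutral, +1 = liberal).
    The mean captures direction; the normalized Shannon entropy of the answer
    distribution captures inconsistency and down-weights erratic models.
    """
    responses = np.asarray(responses, dtype=float)
    direction = responses.mean()  # signed partisan direction of the answers

    # Empirical distribution over the observed answer options.
    _, counts = np.unique(responses, return_counts=True)
    probs = counts / counts.sum()

    # Entropy with base n_options lies in [0, 1]; 0 means perfectly consistent.
    h = entropy(probs, base=n_options)

    # Consistent answers retain their full directional weight.
    return direction * (1.0 - h)

# Example: one model's answers to the same item across 10 runs.
print(entropy_weighted_bias([1, 1, 1, 0, 1, 1, 1, 1, 0, 1], n_options=3))

Under this construction, a model that always gives the same leaning answer scores at the full magnitude of that leaning, while a model that answers near-uniformly across options scores near zero regardless of its mean direction.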

@article{peng2025_2412.16746,
  title={Beyond Partisan Leaning: A Comparative Analysis of Political Bias in Large Language Models},
  author={Tai-Quan Peng and Kaiqi Yang and Sanguk Lee and Hang Li and Yucheng Chu and Yuping Lin and Hui Liu},
  journal={arXiv preprint arXiv:2412.16746},
  year={2025}
}