Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
Yejin Bang, Delong Chen, Nayeon Lee, Pascale Fung
arXiv:2403.18932, 27 March 2024
Papers citing "Measuring Political Bias in Large Language Models: What Is Said and How It Is Said" (7 papers)
Linear Representations of Political Perspective Emerge in Large Language Models
Junsol Kim, James Evans, Aaron Schein (03 Mar 2025)

A Cautionary Tale About "Neutrally" Informative AI Tools Ahead of the 2025 Federal Elections in Germany
Ina Dormuth, Sven Franke, Marlies Hafer, Tim Katzke, Alexander Marx, Emmanuel Müller, Daniel Neider, Markus Pauly, Jérôme Rutinowski (21 Feb 2025)

Unmasking Conversational Bias in AI Multiagent Systems
Erica Coppolillo, Giuseppe Manco, Luca Maria Aiello (24 Jan 2025)

The Political Preferences of LLMs
David Rozado (02 Feb 2024)

Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov (14 Oct 2022)

Neural Media Bias Detection Using Distant Supervision With BABE -- Bias Annotations By Experts
Timo Spinde, Manuel Plank, Jan-David Krieger, Terry Ruas, Bela Gipp, Akiko Aizawa (29 Sep 2022)

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams (18 May 2022)