ResearchTrend.AI

The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation (arXiv:2301.01768)

5 January 2023
Jochen Hartmann
Jasper Schwenzow
Maximilian Witte
ArXiv · PDF · HTML

Papers citing "The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation"

50 / 97 papers shown
SAGE: A Generic Framework for LLM Safety Evaluation
Madhur Jindal, Hari Shrawgi, Parag Agrawal, Sandipan Dandapat
ELM · 28 Apr 2025
Benchmarking Multi-National Value Alignment for Large Language Models
Chengyi Ju, Weijie Shi, Chengzhong Liu, Jiaming Ji, Jipeng Zhang, ..., Jia Zhu, Jiajie Xu, Yaodong Yang, Sirui Han, Yike Guo
17 Apr 2025
Only a Little to the Left: A Theory-grounded Measure of Political Bias in Large Language Models
Mats Faulborn, Indira Sen, Max Pellert, Andreas Spitz, David Garcia
ELM · 20 Mar 2025
Are LLMs (Really) Ideological? An IRT-based Analysis and Alignment Tool for Perceived Socio-Economic Bias in LLMs
Jasmin Wachter, Michael Radloff, Maja Smolej, Katharina Kinder-Kurlanda
17 Mar 2025
LLMs' Leaning in European Elections
Federico Ricciuti
16 Mar 2025
Strategyproof Reinforcement Learning from Human Feedback
Thomas Kleine Buening, Jiarui Gan, Debmalya Mandal, Marta Z. Kwiatkowska
13 Mar 2025
AI-Facilitated Collective Judgements
Manon Revel, Théophile Pénigaud
06 Mar 2025
Measuring Political Preferences in AI Systems: An Integrative Approach
David Rozado
ELM · 04 Mar 2025
Better Aligned with Survey Respondents or Training Data? Unveiling Political Leanings of LLMs on U.S. Supreme Court Cases
Shanshan Xu, T. Y. S. S. Santosh, Yanai Elazar, Quirin Vogel, Barbara Plank, Matthias Grabmair
AILaw · 25 Feb 2025
Multilingual != Multicultural: Evaluating Gaps Between Multilingual Capabilities and Cultural Alignment in LLMs
Jonathan Rystrøm, Hannah Rose Kirk, Scott A. Hale
23 Feb 2025
A Cautionary Tale About "Neutrally" Informative AI Tools Ahead of the 2025 Federal Elections in Germany
Ina Dormuth, Sven Franke, Marlies Hafer, Tim Katzke, Alexander Marx, Emmanuel Müller, Daniel Neider, Markus Pauly, Jérôme Rutinowski
21 Feb 2025
Evaluating Implicit Bias in Large Language Models by Attacking From a Psychometric Perspective
Yuchen Wen, Keping Bi, Wei Chen, J. Guo, Xueqi Cheng
20 Feb 2025
Surveying Attitudinal Alignment Between Large Language Models Vs. Humans Towards 17 Sustainable Development Goals
Qingyang Wu, Ying Xu, Tingsong Xiao, Yunze Xiao, Yitong Li, ..., Yichi Zhang, Shanghai Zhong, Yuwei Zhang, Wei Lu, Yifan Yang
17 Jan 2025
Extracting Affect Aggregates from Longitudinal Social Media Data with Temporal Adapters for Large Language Models
Georg Ahnert, Max Pellert, David García, M. Strohmaier
10 Jan 2025
Hidden Persuaders: LLMs' Political Leaning and Their Influence on Voters
Yujin Potter, Shiyang Lai, Junsol Kim, James Evans, D. Song
31 Oct 2024
Is GPT-4 Less Politically Biased than GPT-3.5? A Renewed Investigation of ChatGPT's Political Biases
Erik Weber, Jérôme Rutinowski, Niklas Jost, Markus Pauly
28 Oct 2024
Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina
Yuan Gao, Dokyun Lee, Gordon Burtch, Sina Fazelpour
LRM · 25 Oct 2024
PRISM: A Methodology for Auditing Biases in Large Language Models
Leif Azzopardi, Yashar Moshfeghi
24 Oct 2024
Bias in the Mirror: Are LLMs opinions robust to their own adversarial attacks?
Virgile Rennard, Christos Xypolopoulos, Michalis Vazirgiannis
AAML · 17 Oct 2024
Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors
Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
17 Oct 2024
Moral Alignment for LLM Agents
Elizaveta Tennant, Stephen Hailes, Mirco Musolesi
02 Oct 2024
Linguini: A benchmark for language-agnostic linguistic reasoning
Eduardo Sánchez, Belen Alastruey, C. Ropers, Pontus Stenetorp, Mikel Artetxe, Marta R. Costa-jussá
ReLM · ELM · LRM · 18 Sep 2024
Real or Robotic? Assessing Whether LLMs Accurately Simulate Qualities of Human Responses in Dialogue
Jonathan Ivey, Shivani Kumar, Jiayu Liu, Hua Shen, Sushrita Rakshit, ..., Dustin Wright, Abraham Israeli, Anders Giovanni Møller, Lechen Zhang, David Jurgens
12 Sep 2024
LLMs generate structurally realistic social networks but overestimate political homophily
Serina Chang, Alicja Chaszczewicz, Emma Wang, Maya Josifovska, Emma Pierson, J. Leskovec
29 Aug 2024
United in Diversity? Contextual Biases in LLM-Based Predictions of the 2024 European Parliament Elections
Leah von der Heyde, Anna Haensch, Alexander Wenz, Bolei Ma
29 Aug 2024
Evaluating Cultural Adaptability of a Large Language Model via Simulation of Synthetic Personas
Louis Kwok, Michal Bravansky, Lewis D. Griffin
13 Aug 2024
GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy
Jan Batzner, Volker Stocker, Stefan Schmid, Gjergji Kasneci
25 Jul 2024
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, Nhathai Phan
20 Jul 2024
Representation Bias in Political Sample Simulations with Large Language Models
Weihong Qi, Hanjia Lyu, Jiebo Luo
16 Jul 2024
Investigating LLMs as Voting Assistants via Contextual Augmentation: A Case Study on the European Parliament Elections 2024
Ilias Chalkidis
11 Jul 2024
Virtual Personas for Language Models via an Anthology of Backstories
Suhong Moon, Marwa Abdulhai, Minwoo Kang, Joseph Suh, Widyadewi Soedarmadji, Eran Kohen Behar, David M. Chan
09 Jul 2024
Jump Starting Bandits with LLM-Generated Prior Knowledge
P. A. Alamdari, Yanshuai Cao, Kevin H. Wilson
27 Jun 2024
Revealing Fine-Grained Values and Opinions in Large Language Models
Dustin Wright, Arnav Arora, Nadav Borenstein, Srishti Yadav, Serge J. Belongie, Isabelle Augenstein
27 Jun 2024
Aligning Large Language Models with Diverse Political Viewpoints
Dominik Stammbach, Philine Widmer, Eunjung Cho, Çağlar Gülçehre, Elliott Ash
20 Jun 2024
A Complete Survey on LLM-based AI Chatbots
Sumit Kumar Dam, Choong Seon Hong, Yu Qiao, Chaoning Zhang
17 Jun 2024
The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models
Bolei Ma, Xinpeng Wang, Tiancheng Hu, Anna Haensch, Michael A. Hedderich, Barbara Plank, Frauke Kreuter
ALM · 16 Jun 2024
CIVICS: Building a Dataset for Examining Culturally-Informed Values in Large Language Models
Giada Pistilli, Alina Leidinger, Yacine Jernite, Atoosa Kasirzadeh, A. Luccioni, Margaret Mitchell
22 May 2024
Facilitating Opinion Diversity through Hybrid NLP Approaches
Michiel van der Meer
15 May 2024
Cross-Care: Assessing the Healthcare Implications of Pre-training Data on Language Model Bias
Shan Chen, Jack Gallifant, Mingye Gao, Pedro Moreira, Nikolaj Munch, ..., Hugo J. W. L. Aerts, Brian Anthony, Leo Anthony Celi, William G. La Cava, Danielle S. Bitterman
09 May 2024
From Persona to Personalization: A Survey on Role-Playing Language Agents
Jiangjie Chen, Xintao Wang, Rui Xu, Siyu Yuan, Yikai Zhang, ..., Caiyu Hu, Siye Wu, Scott Ren, Ziquan Fu, Yanghua Xiao
28 Apr 2024
High-Dimension Human Value Representation in Large Language Models
Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, Pascale Fung
11 Apr 2024
Attributions toward Artificial Agents in a modified Moral Turing Test
Eyal Aharoni, Sharlene Fernandes, Daniel J Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias, Victor Crespo
ELM · 03 Apr 2024
The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition
Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
25 Mar 2024
Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs
Ilias Chalkidis, Stephanie Brandl
20 Mar 2024
Evaluating LLMs for Gender Disparities in Notable Persons
L. Rhue, Sofie Goethals, Arun Sundararajan
14 Mar 2024
Random Silicon Sampling: Simulating Human Sub-Population Opinion Using a Large Language Model Based on Group-Level Demographic Information
Seungjong Sun, Eungu Lee, Dongyan Nan, Xiangying Zhao, Wonbyung Lee, Bernard J. Jansen, Jang Hyun Kim
28 Feb 2024
Beyond prompt brittleness: Evaluating the reliability and consistency of political worldviews in LLMs
Tanise Ceron, Neele Falk, Ana Barić, Dmitry Nikolaev, Sebastian Padó
27 Feb 2024
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
Paul Röttger, Valentin Hofmann, Valentina Pyatkin, Musashi Hinck, Hannah Rose Kirk, Hinrich Schütze, Dirk Hovy
ELM · 26 Feb 2024
Unintended Impacts of LLM Alignment on Global Representation
Michael Joseph Ryan, William B. Held, Diyi Yang
22 Feb 2024
How Susceptible are Large Language Models to Ideological Manipulation?
Kai Chen, Zihao He, Jun Yan, Taiwei Shi, Kristina Lerman
18 Feb 2024