arXiv:2304.07333
Cited By
The Self-Perception and Political Biases of ChatGPT
14 April 2023
Jérôme Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, Markus Pauly
Papers citing
"The Self-Perception and Political Biases of ChatGPT"
8 papers
SAGE: A Generic Framework for LLM Safety Evaluation
Madhur Jindal, Hari Shrawgi, Parag Agrawal, Sandipan Dandapat
28 Apr 2025
A Cautionary Tale About "Neutrally" Informative AI Tools Ahead of the 2025 Federal Elections in Germany
Ina Dormuth, Sven Franke, Marlies Hafer, Tim Katzke, Alexander Marx, Emmanuel Müller, Daniel Neider, Markus Pauly, Jérôme Rutinowski
21 Feb 2025
Evaluating Implicit Bias in Large Language Models by Attacking From a Psychometric Perspective
Yuchen Wen, Keping Bi, Wei Chen, J. Guo, Xueqi Cheng
20 Feb 2025
The high dimensional psychological profile and cultural bias of ChatGPT
Hang Yuan, Zhongyue Che, Shao Li, Yue Zhang, Xiaomeng Hu, Siyang Luo
06 May 2024
Synocene, Beyond the Anthropocene: De-Anthropocentralising Human-Nature-AI Interaction
Isabelle Hupont, Marina Wainer, Sam Nester, Sylvie Tissot, Lucía Iglesias-Blanco, S. Baldassarri
13 Dec 2023
The Unequal Opportunities of Large Language Models: Revealing Demographic Bias through Job Recommendations
A. Salinas, Parth Vipul Shah, Yuzhong Huang, Robert McCormack, Fred Morstatter
03 Aug 2023
I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models
Max Reuter, William B. Schulze
06 Jun 2023
A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa
19 May 2023