Role-Play Paradox in Large Language Models: Reasoning Performance Gains and Ethical Dilemmas

Jinman Zhao
Zifan Qian
Linbo Cao
Yining Wang
Yitian Ding
Yulan Hu
Zeyu Zhang
Zeyong Jin
Abstract

Role-play in large language models (LLMs) enhances their ability to generate contextually relevant, high-quality responses by simulating diverse cognitive perspectives. However, our study identifies significant risks associated with this technique. First, we demonstrate that autotuning, a method that automatically selects a model's role based on the question, can lead to harmful outputs even when the model is asked to adopt neutral roles. Second, we investigate how different roles affect the likelihood of generating biased or harmful content. Testing on benchmarks containing stereotypical and harmful questions, we find that role-play consistently amplifies the risk of biased outputs. Our results underscore the need for careful consideration of both role simulation and the tuning process when deploying LLMs in sensitive or high-stakes contexts.
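The sketch below illustrates the general pattern the abstract describes: a first call that auto-selects a persona for the question ("autotuning"), followed by a second call that answers in that persona. The role-selection prompt, model name, and OpenAI-style client are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of role-play prompting with automatic role selection.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# prompts and model name are placeholders, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model


def select_role(question: str) -> str:
    """Ask the model to name a persona suited to answering the question."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": ("In a few words, name the expert persona best suited "
                        f"to answer this question:\n{question}"),
        }],
    )
    return resp.choices[0].message.content.strip()


def role_play_answer(question: str) -> str:
    """Answer the question while the model adopts the auto-selected role."""
    persona = select_role(question)
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(role_play_answer("Why do metals conduct electricity?"))
```

The paper's finding is that this kind of auto-selected persona can amplify biased or harmful outputs, so the selection step itself is part of the attack surface, not just the final answer.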

@article{zhao2025_2409.13979,
  title={Role-Play Paradox in Large Language Models: Reasoning Performance Gains and Ethical Dilemmas},
  author={Jinman Zhao and Zifan Qian and Linbo Cao and Yining Wang and Yitian Ding and Yulan Hu and Zeyu Zhang and Zeyong Jin},
  journal={arXiv preprint arXiv:2409.13979},
  year={2025}
}