
PersLLM: A Personified Training Approach for Large Language Models

Abstract

Large language models (LLMs) exhibit human-like intelligence, enabling them to simulate human behavior and support applications that require both humanized communication and extensive knowledge reserves. Prior efforts personify LLMs with specialized training data or hand-crafted prompts, but these approaches face challenges such as insufficient data usage or rigid behavior patterns; as a result, personified LLMs fail to capture personified knowledge or express persistent opinions. To fully unlock the potential of LLM personification, we propose PersLLM, a framework for better data construction and model tuning. To address insufficient data usage, we incorporate strategies such as Chain-of-Thought prompting and anti-induction, improving the quality of data construction and capturing personal experiences, knowledge, and thoughts more comprehensively. To address rigid behavior patterns, we design the tuning process and introduce automated DPO to enhance the specificity and dynamism of the models' personalities, leading to more natural opinion communication. Both automated metrics and expert human evaluations demonstrate the effectiveness of our approach. Case studies in human-machine interaction and multi-agent systems further suggest potential application scenarios and future directions for LLM personification.

@article{zeng2025_2407.12393,
  title={PersLLM: A Personified Training Approach for Large Language Models},
  author={Zheni Zeng and Jiayi Chen and Huimin Chen and Yukun Yan and Yuxuan Chen and Zhenghao Liu and Zhiyuan Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2407.12393},
  year={2025}
}