The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public worldwide, the synthetic personality traits embedded in these models, by virtue of training on large amounts of human data, are becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a novel, comprehensive, and psychometrically valid and reliable methodology for administering and validating personality tests on widely used LLMs, as well as for shaping personality in the text such LLMs generate. Applying this method to 18 LLMs, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss the application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
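The abstract's mention of administering personality tests to LLMs rests on standard psychometric scoring of Likert-scale item batteries. As a minimal illustration, the sketch below scores one trait domain from 1-5 Likert ratings, handling reverse-keyed items; the item identifiers and ratings are hypothetical, and the paper's actual instruments and prompting setup are not reproduced here.

```python
def score_domain(responses, reverse_keyed=frozenset()):
    """Average 1-5 Likert ratings for one personality trait domain.

    responses: dict mapping item id -> integer rating in [1, 5].
    reverse_keyed: ids of items whose scale is flipped (6 - rating),
    as is standard for negatively worded questionnaire items.
    """
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"rating out of range for {item}: {rating}")
        # Reverse-keyed items are inverted so a higher score always
        # means more of the trait.
        total += (6 - rating) if item in reverse_keyed else rating
    return total / len(responses)

# Hypothetical extraversion items; E2 stands for a negatively worded
# item (e.g. "I keep in the background") and is reverse-keyed.
ratings = {"E1": 5, "E2": 2, "E3": 4}
print(score_domain(ratings, reverse_keyed={"E2"}))  # (5 + 4 + 4) / 3 ≈ 4.33
```

In practice, the ratings would come from parsing an LLM's responses to each questionnaire item under a chosen prompting configuration, and reliability and validity would then be assessed over many such administrations.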
@article{serapio-garcía2025_2307.00184,
  title   = {Personality Traits in Large Language Models},
  author  = {Greg Serapio-García and Mustafa Safdari and Clément Crepy and Luning Sun and Stephen Fitz and Peter Romero and Marwa Abdulhai and Aleksandra Faust and Maja Matarić},
  journal = {arXiv preprint arXiv:2307.00184},
  year    = {2025}
}