R.U.Psycho? Robust Unified Psychometric Testing of Language Models
Generative language models are increasingly being subjected to psychometric questionnaires intended for human testing, in efforts to establish their traits, as benchmarks for alignment, or to simulate participants in social science experiments. While this growing body of work sheds light on the likeness of model responses to those of humans, concerns are warranted regarding the rigour and reproducibility with which these experiments may be conducted. Instabilities in model outputs, sensitivity to prompt design and parameter settings, and the large number of available model versions increase documentation requirements. Consequently, generalization of findings is often complex and reproducibility is far from guaranteed. In this paper, we present R.U.Psycho, a framework for designing and running robust and reproducible psychometric experiments on generative language models that requires limited coding expertise. We demonstrate the capability of our framework on a variety of psychometric questionnaires, with results that lend support to prior findings in the literature. R.U.Psycho is available as a Python package at this https URL.
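The abstract does not show the package's API, but the robustness pattern it describes, repeated runs over paraphrased prompts with parseable-answer checks and documented settings, can be sketched in plain Python. Everything below (`query_model`, `TEMPLATES`, `administer`) is a hypothetical illustration of that pattern, not R.U.Psycho's actual interface.

```python
import random
from collections import Counter
from statistics import mean, stdev

# Hypothetical stand-in for a real model call (e.g., via an API client).
# Here it returns a random Likert score so the sketch is runnable end to end.
def query_model(prompt: str, temperature: float = 0.7) -> str:
    return str(random.randint(1, 5))

# One BFI-style item with two paraphrased prompt templates, reflecting the
# paper's point that model responses are sensitive to prompt design.
ITEM = "I see myself as someone who is talkative."
TEMPLATES = [
    "Rate the statement from 1 (disagree) to 5 (agree): {item}",
    "On a 1-5 scale, where 1 is strongly disagree and 5 strongly agree, rate: {item}",
]

def administer(item: str, runs_per_template: int = 10) -> list[int]:
    """Collect repeated responses across prompt variants to expose instability."""
    scores = []
    for template in TEMPLATES:
        for _ in range(runs_per_template):
            raw = query_model(template.format(item=item)).strip()
            if raw in {"1", "2", "3", "4", "5"}:  # discard unparseable answers
                scores.append(int(raw))
    return scores

scores = administer(ITEM)
print(f"n={len(scores)} mean={mean(scores):.2f} sd={stdev(scores):.2f}")
print("distribution:", Counter(scores))
```

Reporting the full response distribution across prompt variants, rather than a single score, is what makes instability visible and the experiment reproducible.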
@article{schelb2025_2503.10229,
  title   = {R.U.Psycho? Robust Unified Psychometric Testing of Language Models},
  author  = {Julian Schelb and Orr Borin and David Garcia and Andreas Spitz},
  journal = {arXiv preprint arXiv:2503.10229},
  year    = {2025}
}