Human Preferences for Constructive Interactions in Language Model Alignment

As large language models (LLMs) enter the mainstream, aligning them to foster constructive dialogue rather than exacerbate societal divisions is critical. Using an individualized and multicultural alignment dataset of over 7,500 conversations in which individuals from 74 countries engaged with 21 LLMs, we examined how linguistic attributes linked to constructive interactions are reflected in the human preference data used to train AI. We found that users consistently preferred well-reasoned and nuanced responses while rejecting those high in personal storytelling. However, users who believed that AI should reflect their values placed less weight on reasoning in LLM responses and more on curiosity. Encouragingly, we observed that users could set the tone for how constructive their conversation would be, as LLMs mirrored linguistic attributes of user queries, including toxicity.
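The mirroring finding can be illustrated with a minimal sketch, not the authors' actual analysis: assuming each user query and the model's reply have already been scored for toxicity by the same classifier (e.g., Perspective API or Detoxify), one can check whether response toxicity tracks query toxicity with a simple correlation. The paired score lists below are toy placeholder values.

```python
# Hypothetical sketch of a linguistic-mirroring check, assuming per-turn
# toxicity scores for user queries and LLM responses already exist.
from scipy.stats import pearsonr

# Toy paired scores, one value per conversation turn; in practice these would
# come from scoring each query and its reply with the same toxicity classifier.
query_toxicity = [0.02, 0.10, 0.45, 0.80, 0.05, 0.60]
response_toxicity = [0.01, 0.08, 0.30, 0.55, 0.04, 0.40]

# A positive, significant correlation would indicate that responses mirror
# the toxicity level of the queries that prompted them.
r, p = pearsonr(query_toxicity, response_toxicity)
print(f"query-response toxicity correlation: r={r:.2f}, p={p:.3f}")
```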
@article{kyrychenko2025_2503.16480,
  title={Human Preferences for Constructive Interactions in Language Model Alignment},
  author={Yara Kyrychenko and Jon Roozenbeek and Brandon Davidson and Sander van der Linden and Ramit Debnath},
  journal={arXiv preprint arXiv:2503.16480},
  year={2025}
}