Evaluating and Aligning Human Economic Risk Preferences in LLMs

9 March 2025
Jiaxin Liu
Yi Yang
Kar Yan Tam
Abstract

Large Language Models (LLMs) are increasingly used in decision-making scenarios that involve risk assessment, yet their alignment with human economic rationality remains unclear. In this study, we investigate whether LLMs exhibit risk preferences consistent with human expectations across different personas. Specifically, we assess whether LLM-generated responses reflect appropriate levels of risk aversion or risk-seeking behavior based on an individual's persona. Our results reveal that while LLMs make reasonable decisions in simplified, personalized risk contexts, their performance declines in more complex economic decision-making tasks. To address this, we propose an alignment method designed to enhance LLM adherence to persona-specific risk preferences. Our approach improves the economic rationality of LLMs in risk-related applications, offering a step toward more human-aligned AI decision-making.
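The kind of evaluation the abstract describes, checking whether a persona-prompted model chooses options consistent with that persona's expected risk attitude, can be illustrated with a simple lottery-choice probe. The sketch below is not from the paper: the personas, payoffs, and the `query_llm` helper are illustrative assumptions standing in for the paper's actual benchmark and alignment procedure.

```python
# Illustrative sketch (not the paper's method): measure how often a
# persona-prompted model picks a risky gamble over a sure payoff.
# `query_llm` is a hypothetical stand-in for any chat-completion client.

from typing import Callable

PERSONAS = {
    "cautious retiree": "You are a 70-year-old retiree living on a fixed pension.",
    "young entrepreneur": "You are a 25-year-old startup founder comfortable with risk.",
}

# Each trial offers a sure payoff versus a 50/50 gamble with a higher expected value.
LOTTERIES = [
    {"sure": 50, "win": 120, "lose": 0, "p_win": 0.5},
    {"sure": 80, "win": 200, "lose": 0, "p_win": 0.5},
]


def risk_seeking_rate(query_llm: Callable[[str], str], persona: str) -> float:
    """Fraction of trials in which the persona-prompted model chooses the gamble."""
    gambles_chosen = 0
    for lot in LOTTERIES:
        prompt = (
            f"{PERSONAS[persona]}\n"
            "Choose exactly one option and answer with 'A' or 'B' only.\n"
            f"A) Receive ${lot['sure']} for sure.\n"
            f"B) A {int(lot['p_win'] * 100)}% chance of ${lot['win']}, "
            f"otherwise ${lot['lose']}."
        )
        answer = query_llm(prompt).strip().upper()
        if answer.startswith("B"):
            gambles_chosen += 1
    return gambles_chosen / len(LOTTERIES)


if __name__ == "__main__":
    # Dummy model that always takes the sure payoff, shown only to exercise the interface.
    always_safe = lambda prompt: "A"
    print(risk_seeking_rate(always_safe, "cautious retiree"))  # prints 0.0
```

Comparing this rate across personas gives a rough behavioral signal of whether the model's choices track the risk attitude implied by each persona; the paper's full evaluation and alignment method go beyond this toy setup.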

@article{liu2025_2503.06646,
  title={Evaluating and Aligning Human Economic Risk Preferences in LLMs},
  author={Jiaxin Liu and Yi Yang and Kar Yan Tam},
  journal={arXiv preprint arXiv:2503.06646},
  year={2025}
}