This study examines the risk preferences of Large Language Models (LLMs) and how aligning them with human ethical standards affects their economic decision-making. An analysis of 30 LLMs reveals a range of inherent risk profiles, from risk-averse to risk-seeking. We find that aligning LLMs with human values (emphasizing harmlessness, helpfulness, and honesty) shifts them towards risk aversion. While moderate alignment improves investment forecast accuracy, excessive alignment leads to overly cautious predictions, potentially resulting in severe underinvestment. Our findings highlight the need for a nuanced approach that balances ethical alignment with the specific requirements of economic domains when deploying LLMs in finance.
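The abstract does not spell out how risk preferences are elicited. As one possible illustration (not the paper's protocol), a Holt-Laury-style multiple price list can be posed to a model and its risk attitude scored by how many "safe" options it picks; the `query_model` wrapper and the payoff values below are assumptions made for this sketch.

```python
# Illustrative sketch, not the paper's method: score an LLM's risk attitude
# with a Holt-Laury-style multiple price list.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call to the LLM under test."""
    raise NotImplementedError("plug in your LLM client here")

def risk_profile(n_rows: int = 10) -> int:
    """Return the number of 'safe' choices; more safe choices suggest more risk aversion."""
    safe_choices = 0
    for i in range(1, n_rows + 1):
        p = i / n_rows  # probability of the high payoff rises down the list
        prompt = (
            "Choose one option and answer with 'A' or 'B' only.\n"
            f"Option A: {p:.0%} chance of $2.00, {1 - p:.0%} chance of $1.60.\n"
            f"Option B: {p:.0%} chance of $3.85, {1 - p:.0%} chance of $0.10."
        )
        answer = query_model(prompt).strip().upper()
        if answer.startswith("A"):  # Option A is the low-variance ("safe") lottery
            safe_choices += 1
    return safe_choices
```

Under standard expected-utility assumptions, a risk-neutral agent switches from A to B around the middle of the list, so a count well above that midpoint would indicate risk aversion and a count well below it risk seeking.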