Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities across a variety of financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to improve decision-making. To address this, we propose \textsc{FLAG-Trader}, a unified architecture that integrates linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization. In this architecture, a partially fine-tuned LLM acts as the policy network, leveraging pre-trained knowledge while adapting to the financial domain through parameter-efficient fine-tuning. Through policy gradient optimization driven by trading rewards, our framework not only enhances LLM performance in trading but also improves performance on other financial-domain tasks. We present extensive empirical evidence to validate these enhancements.
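To make the described setup concrete, the sketch below shows one plausible reading of the core loop: an LLM backbone frozen except for LoRA adapters acts as the policy network, a small head maps its final hidden state to discrete trading actions, and adapter weights are updated with a REINFORCE-style policy gradient on trading rewards. The model name, action set, action head, toy environment, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

\begin{verbatim}
# Minimal sketch of an LLM-as-policy trading loop with PEFT (assumptions noted above).
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL = "sshleifer/tiny-gpt2"      # stand-in backbone; the paper's model may differ
ACTIONS = ["SELL", "HOLD", "BUY"]  # assumed discrete action space

tok = AutoTokenizer.from_pretrained(MODEL)
base = AutoModelForCausalLM.from_pretrained(MODEL)

# Parameter-efficient fine-tuning: only LoRA adapter weights are trainable,
# so pre-trained knowledge in the frozen base weights is preserved.
policy_lm = get_peft_model(
    base,
    LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
               task_type="CAUSAL_LM"),
)
action_head = nn.Linear(base.config.hidden_size, len(ACTIONS))  # assumed head

trainable = [p for p in policy_lm.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable + list(action_head.parameters()), lr=1e-4)

def act(state_text):
    """Encode a textual market state; sample an action and its log-prob."""
    inputs = tok(state_text, return_tensors="pt")
    out = policy_lm(**inputs, output_hidden_states=True)
    h = out.hidden_states[-1][:, -1, :]  # last-token representation
    dist = torch.distributions.Categorical(logits=action_head(h))
    a = dist.sample()
    return a.item(), dist.log_prob(a)

def toy_reward(action, price_return):
    """Toy trading reward: position (-1/0/+1) times next-step return."""
    return float((action - 1) * price_return)

for step in range(100):  # toy training loop with a synthetic market state
    state = f"Price moved {torch.randn(1).item():+.2f}% last step. Action?"
    action, logp = act(state)
    reward = toy_reward(action, torch.randn(1).item())
    loss = -(logp * reward).sum()  # REINFORCE policy-gradient objective
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}

One design point this sketch mirrors: because only the adapters and the action head receive trading-reward gradients, the frozen base retains its general language ability, which is consistent with the abstract's claim that performance on other financial-domain tasks can also improve.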
@article{xiong2025_2502.11433,
  title={FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading},
  author={Guojun Xiong and Zhiyang Deng and Keyi Wang and Yupeng Cao and Haohang Li and Yangyang Yu and Xueqing Peng and Mingquan Lin and Kaleb E Smith and Xiao-Yang Liu and Jimin Huang and Sophia Ananiadou and Qianqian Xie},
  journal={arXiv preprint arXiv:2502.11433},
  year={2025}
}