LLMs Meet Finance: Fine-Tuning Foundation Models for the Open FinLLM Leaderboard

Abstract
This paper investigates the application of large language models (LLMs) to financial tasks, using the Open FinLLM Leaderboard as a benchmark. Building on Qwen2.5 and DeepSeek-R1, we fine-tuned foundation models with supervised fine-tuning (SFT), direct preference optimization (DPO), and reinforcement learning (RL) to enhance their financial capabilities. The fine-tuned models achieved substantial performance gains across a wide range of financial tasks. We also measured the data scaling law in the financial domain. Our work demonstrates the potential of LLMs in financial applications.
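To make the DPO objective mentioned above concrete, here is a minimal sketch of the per-example DPO loss in plain Python. The log-probabilities and the beta value are hypothetical illustrative inputs, not figures from the paper; the paper's actual training setup is not specified in this abstract.

```python
import math

def dpo_loss(policy_chosen_lp: float, policy_rejected_lp: float,
             ref_chosen_lp: float, ref_rejected_lp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * margin).

    The margin is how much more the policy (relative to the frozen
    reference model) prefers the chosen response over the rejected one.
    """
    margin = ((policy_chosen_lp - ref_chosen_lp)
              - (policy_rejected_lp - ref_rejected_lp))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Example with made-up log-probabilities: a policy that favors the
# chosen response more than the reference does yields a lower loss.
good = dpo_loss(-1.0, -5.0, -2.0, -3.0)   # positive margin
bad = dpo_loss(-3.0, -2.0, -2.0, -3.0)    # negative margin
```

When the policy and reference agree exactly, the margin is zero and the loss equals log 2; the loss decreases monotonically as the preference margin grows, which is what drives the model toward the preferred responses during fine-tuning.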
Citation:
@article{rao2025_2504.13125,
  title={LLMs Meet Finance: Fine-Tuning Foundation Models for the Open FinLLM Leaderboard},
  author={Varun Rao and Youran Sun and Mahendra Kumar and Tejas Mutneja and Agastya Mukherjee and Haizhao Yang},
  journal={arXiv preprint arXiv:2504.13125},
  year={2025}
}