Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications

Financial LLMs hold promise for advancing financial tasks and domain-specific applications. However, they are limited by scarce corpora, weak multimodal capabilities, and narrow evaluations, making them less suited to real-world applications. To address this, we introduce \textit{Open-FinLLMs}, the first open-source multimodal financial LLMs designed to handle diverse tasks across text, tabular, time-series, and chart data, excelling in zero-shot, few-shot, and fine-tuning settings. The suite includes FinLLaMA, pre-trained on a comprehensive 52-billion-token corpus; FinLLaMA-Instruct, fine-tuned with 573K financial instructions; and FinLLaVA, enhanced with 1.43M multimodal tuning pairs for strong cross-modal reasoning. We comprehensively evaluate Open-FinLLMs on 14 financial tasks across 30 datasets and 4 multimodal tasks in zero-shot, few-shot, and supervised fine-tuning settings, introducing two new multimodal evaluation datasets. Our results show that Open-FinLLMs outperform advanced financial and general LLMs such as GPT-4 across financial NLP, decision-making, and multimodal tasks, highlighting their potential to tackle real-world challenges. To foster innovation and collaboration across academia and industry, we release all code (this https URL) and models under OSI-approved licenses.
@article{huang2025_2408.11878,
  title={Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications},
  author={Jimin Huang and Mengxi Xiao and Dong Li and Zihao Jiang and Yuzhe Yang and Yifei Zhang and Lingfei Qian and Yan Wang and Xueqing Peng and Yang Ren and Ruoyu Xiang and Zhengyu Chen and Xiao Zhang and Yueru He and Weiguang Han and Shunian Chen and Lihang Shen and Daniel Kim and Yangyang Yu and Yupeng Cao and Zhiyang Deng and Haohang Li and Duanyu Feng and Yongfu Dai and VijayaSai Somasundaram and Peng Lu and Guojun Xiong and Zhiwei Liu and Zheheng Luo and Zhiyuan Yao and Ruey-Ling Weng and Meikang Qiu and Kaleb E Smith and Honghai Yu and Yanzhao Lai and Min Peng and Jian-Yun Nie and Jordan W. Suchow and Xiao-Yang Liu and Benyou Wang and Alejandro Lopez-Lira and Qianqian Xie and Sophia Ananiadou and Junichi Tsujii},
  journal={arXiv preprint arXiv:2408.11878},
  year={2025}
}