FinMMDocR: Benchmarking Financial Multimodal Reasoning with Scenario Awareness, Document Understanding, and Multi-Step Computation

Zichen Tang
Haihong E
Rongjin Li
Jiacheng Liu
Linwei Jia
Zhuodi Hao
Zhongjun Yang
Yuanze Li
Haolin Tian
Xinyi Hu
Peizhi Zhao
Yuan Liu
Zhengyu Wang
Xianghe Wang
Yiling Huang
Xueyuan Lin
Ruofei Bai
Zijian Xie
Qian Huang
Ruining Cao
Haocheng Gao
Main: 7 pages · 6 figures · 8 tables · Bibliography: 2 pages · Appendix: 93 pages
Abstract

We introduce FinMMDocR, a novel bilingual multimodal benchmark for evaluating multimodal large language models (MLLMs) on real-world financial numerical reasoning. Compared to existing benchmarks, our work delivers three major advancements. (1) Scenario Awareness: 57.9% of the 1,200 expert-annotated problems incorporate 12 types of implicit financial scenarios (e.g., Portfolio Management), challenging models to perform expert-level reasoning based on assumptions. (2) Document Understanding: 837 Chinese/English documents spanning 9 types (e.g., Company Research) average 50.8 pages with rich visual elements, significantly surpassing existing benchmarks in both the breadth and depth of financial documents. (3) Multi-Step Computation: problems demand 11 reasoning steps on average (5.3 extraction + 5.7 calculation steps), with 65.0% requiring cross-page evidence (2.4 pages on average). The best-performing MLLM achieves only 58.0% accuracy, and different retrieval-augmented generation (RAG) methods show significant performance variation on this task. We expect FinMMDocR to drive improvements in MLLMs and reasoning-enhanced methods on complex multimodal reasoning tasks in real-world scenarios.
