WildDoc: How Far Are We from Achieving Comprehensive and Robust Document Understanding in the Wild?

16 May 2025
An-Lan Wang, Jingqun Tang, Liao Lei, Hao Feng, Qi Liu, Xiang Fei, Jinghui Lu, Han Wang, Wen Liu, Hao Liu, Yang Liu, Xiang Bai, Can Huang
Abstract

The rapid advancements in Multimodal Large Language Models (MLLMs) have significantly enhanced capabilities in Document Understanding. However, prevailing benchmarks like DocVQA and ChartQA predominantly comprise scanned or digital documents, inadequately reflecting the intricate challenges posed by diverse real-world scenarios, such as variable illumination and physical distortions. This paper introduces WildDoc, the inaugural benchmark designed specifically for assessing document understanding in natural environments. WildDoc incorporates a diverse set of manually captured document images reflecting real-world conditions and leverages document sources from established benchmarks to facilitate comprehensive comparisons with digital or scanned documents. Further, to rigorously evaluate model robustness, each document is captured four times under different conditions. Evaluations of state-of-the-art MLLMs on WildDoc expose substantial performance declines and underscore the models' inadequate robustness compared to traditional benchmarks, highlighting the unique challenges posed by real-world document understanding. Our project homepage is available at this https URL.
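As an illustration of the robustness protocol described in the abstract (each document captured four times under different conditions), the sketch below scores a model on per-document consistency across captures. It is a minimal, assumption-based example rather than the authors' released evaluation code: the record fields (doc_id, condition, correct) and the strict all-captures-correct consistency criterion are hypothetical choices for demonstration.

# Minimal sketch (not the authors' code) of scoring robustness across the
# four captures of each document. Field names are hypothetical assumptions.
from collections import defaultdict

def score_robustness(records):
    """records: iterable of dicts such as
    {"doc_id": "docvqa_0001", "condition": "low_light", "correct": True},
    one entry per (document, capture condition) result."""
    per_doc = defaultdict(list)
    for r in records:
        per_doc[r["doc_id"]].append(bool(r["correct"]))

    total = len(per_doc)
    # Average accuracy over all captures, macro-averaged per document.
    avg_acc = sum(sum(v) / len(v) for v in per_doc.values()) / total
    # "Consistent" here means the model is correct under every capture
    # condition of the same document -- a strict robustness criterion.
    consistency = sum(all(v) for v in per_doc.values()) / total
    return {"average_accuracy": avg_acc, "consistency": consistency}

if __name__ == "__main__":
    demo = [
        {"doc_id": "d1", "condition": "normal", "correct": True},
        {"doc_id": "d1", "condition": "low_light", "correct": False},
        {"doc_id": "d2", "condition": "normal", "correct": True},
        {"doc_id": "d2", "condition": "warped", "correct": True},
    ]
    print(score_robustness(demo))

Under this strict criterion, a model that answers correctly on the clean capture but fails under low light or physical distortion receives no consistency credit for that document, which is one way the gap between digital-document and in-the-wild performance can be made visible.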

@article{wang2025_2505.11015,
  title={WildDoc: How Far Are We from Achieving Comprehensive and Robust Document Understanding in the Wild?},
  author={An-Lan Wang and Jingqun Tang and Liao Lei and Hao Feng and Qi Liu and Xiang Fei and Jinghui Lu and Han Wang and Weiwei Liu and Hao Liu and Yuliang Liu and Xiang Bai and Can Huang},
  journal={arXiv preprint arXiv:2505.11015},
  year={2025}
}