arXiv:2507.16271 (v2, latest)

Beyond Isolated Dots: Benchmarking Structured Table Construction as Deep Knowledge Extraction

22 July 2025
Tianyun Zhong, Guozhao Mo, Yanjiang Liu, Yihan Chen, Lingdi Kong, Xuanang Chen, Yaojie Lu, Hongyu Lin, Shiwei Ye, Ben He, Le Sun
Main: 7 pages · Appendix: 14 pages · Bibliography: 3 pages · 8 figures · 13 tables
Abstract

With the emergence of large language models (LLMs), there is an expectation that LLMs can effectively extract explicit information from complex real-world documents (e.g., papers, reports). However, most LLMs generate paragraph-style answers that are chaotic, disorganized, and untraceable. To bridge this gap, we introduce the Arranged and Organized Extraction Benchmark (AOE), a new bilingual benchmark with data and documents of varying lengths designed to systematically evaluate the ability of LLMs to comprehend fragmented documents and reconstruct isolated information into a single organized table. Unlike conventional text-to-table tasks, which rely on fixed schemas and narrow task domains, AOE includes 11 carefully crafted tasks across three diverse domains, requiring models to generate context-specific schemas tailored to varied input queries. In our experiments, we evaluated both open-source and closed-source state-of-the-art LLMs. The results show that even the most advanced models struggled significantly. The benchmark is available at this https URL.
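To make the task format concrete, below is a minimal, purely illustrative Python sketch of a query-driven text-to-table check: it parses a pipe-delimited table answer and scores predicted cells against a reference table whose schema is implied by the query. The query, table contents, and cell-level F1 scoring are assumptions for illustration only; they are not drawn from AOE, whose tasks, data, and metrics are defined in the paper.

```python
"""Illustrative sketch of an "organized extraction" style check.

A model is given fragmented documents plus a query and must answer with
one table whose schema fits the query. Everything here (query, values,
cell-level F1) is hypothetical and NOT taken from the AOE benchmark.
"""


def parse_markdown_table(text: str) -> list[dict[str, str]]:
    """Parse a simple pipe-delimited markdown table into row dicts."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    if len(lines) < 2:
        return []
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for ln in lines[2:]:  # skip the |---|---| separator line
        cells = [c.strip() for c in ln.strip("|").split("|")]
        if len(cells) == len(header):
            rows.append(dict(zip(header, cells)))
    return rows


def cell_f1(pred: list[dict[str, str]], gold: list[dict[str, str]]) -> float:
    """Toy cell-level F1: exact (column, value) pairs, ignoring row order."""
    pred_cells = {(k, v) for row in pred for k, v in row.items()}
    gold_cells = {(k, v) for row in gold for k, v in row.items()}
    if not pred_cells or not gold_cells:
        return 0.0
    tp = len(pred_cells & gold_cells)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_cells)
    recall = tp / len(gold_cells)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    # The query implies the schema; it is not fixed in advance.
    query = "List each method, its dataset, and reported accuracy."

    # Reference table (illustrative values only).
    gold = parse_markdown_table("""
    | Method | Dataset | Accuracy |
    |---|---|---|
    | AlphaNet | WikiQA | 87.2 |
    | BetaNet | WikiQA | 84.5 |
    """)

    # A model's answer, e.g. returned by any LLM given the documents + query.
    model_answer = """
    | Method | Dataset | Accuracy |
    |---|---|---|
    | AlphaNet | WikiQA | 87.2 |
    | BetaNet | WikiQA | 83.0 |
    """

    pred = parse_markdown_table(model_answer)
    print(f"cell-level F1: {cell_f1(pred, gold):.3f}")
```

The point of the sketch is the shape of the problem: because the expected schema depends on the query, evaluation has to compare structured cells rather than free-form paragraphs, which is what distinguishes this setting from ordinary question answering.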
