HCT-QA: A Benchmark for Question Answering on Human-Centric Tables

Tabular data embedded within PDF files, web pages, and other document formats is prevalent across numerous sectors such as government, engineering, science, and business. These human-centric tables (HCTs) possess a unique combination of high business value and intricate layouts, offer limited operational power at scale, and sometimes serve as the only source of critical insights. However, their complexity poses significant challenges to traditional data extraction, processing, and querying methods. While current solutions focus on transforming these tables into relational formats for SQL queries, they fall short in handling the diverse and complex layouts of HCTs, and hence in making them amenable to querying. This paper describes HCT-QA, an extensive benchmark of HCTs, natural language queries, and related answers on thousands of tables. Our dataset includes 2,188 real-world HCTs with 9,835 QA pairs and 4,679 synthetic tables with 67.5K QA pairs. While HCTs can potentially be processed by different types of query engines, in this paper we focus on Large Language Models as potential engines and assess their ability to process and query such tables.
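The evaluation setup described above can be sketched minimally: serialize a table as text, combine it with a natural language question into an LLM prompt, and score the model's answer against the gold answer. This is an illustrative sketch only; the prompt template, serialization, and exact-match scorer below are assumptions for demonstration, not the benchmark's actual protocol.

```python
def serialize_table(rows):
    """Render a table (list of rows) as pipe-separated text."""
    return "\n".join(" | ".join(str(c) for c in row) for row in rows)

def build_prompt(rows, question):
    """Combine the serialized table and question into a single prompt."""
    return (
        "Answer the question using only the table below.\n\n"
        f"{serialize_table(rows)}\n\n"
        f"Question: {question}\nAnswer:"
    )

def exact_match(pred, gold):
    """Case- and whitespace-insensitive exact-match score (illustrative)."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(pred) == norm(gold)

# Hypothetical example table and question (not from the dataset).
table = [
    ["Sector", "Tables"],
    ["Government", "1200"],
    ["Science", "988"],
]
prompt = build_prompt(table, "How many tables come from the Science sector?")
# A real evaluation would send `prompt` to an LLM; here we only check scoring.
assert exact_match(" 988 ", "988")
```

In practice, HCTs with merged cells or nested headers would need a richer serialization than the flat pipe-separated form sketched here.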
@article{ahmad2025_2504.20047,
  title={HCT-QA: A Benchmark for Question Answering on Human-Centric Tables},
  author={Mohammad S. Ahmad and Zan A. Naeem and Michaël Aupetit and Ahmed Elmagarmid and Mohamed Eltabakh and Xiasong Ma and Mourad Ouzzani and Chaoyi Ruan},
  journal={arXiv preprint arXiv:2504.20047},
  year={2025}
}