
Can LLMs Generate Tabular Summaries of Science Papers? Rethinking the Evaluation Protocol

Abstract

Literature review tables are essential for summarizing and comparing collections of scientific papers. We explore the task of generating tables that best fulfill a user's informational needs given a collection of scientific papers. Building on recent work (Newman et al., 2024), we extend prior approaches to address real-world complexities through a combination of LLM-based methods and human annotations. Our contributions focus on three key challenges encountered in real-world use: (i) user prompts are often under-specified; (ii) retrieved candidate papers frequently contain irrelevant content; and (iii) task evaluation should move beyond shallow text similarity measures and instead assess the utility of inferred tables for information-seeking tasks (e.g., comparing papers). To support reproducible evaluation, we introduce ARXIV2TABLE, a more realistic and challenging benchmark for this task, along with a novel approach to improve literature review table generation in real-world scenarios. Our extensive experiments on this benchmark show that both open-weight and proprietary LLMs struggle with the task, highlighting its difficulty and the need for further advancements. Our dataset and code are available at this https URL.
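
To make the task concrete, below is a minimal sketch of LLM-based table generation as described in the abstract: given a user's informational need and a set of candidate papers, prompt an LLM to produce a comparison table (papers as rows, comparison aspects as columns). This is not the authors' pipeline; the model name, prompt wording, and JSON schema are assumptions made purely for illustration.

```python
# Illustrative sketch only: not the ARXIV2TABLE implementation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_table(user_prompt: str, papers: dict[str, str]) -> dict:
    """Ask an LLM for a literature review table as JSON.

    `papers` maps a paper identifier (e.g., an arXiv id) to its title and abstract.
    Returns {"columns": [...], "rows": {paper_id: {column: value, ...}}}.
    """
    paper_block = "\n\n".join(f"[{pid}]\n{text}" for pid, text in papers.items())
    messages = [
        {"role": "system",
         "content": "You build literature review tables. Respond with JSON only."},
        {"role": "user",
         "content": (
             f"User need: {user_prompt}\n\n"
             f"Papers:\n{paper_block}\n\n"
             "Return JSON with keys 'columns' (a list of comparison aspects) and "
             "'rows' (a mapping from each paper id to a value per column). "
             "Ignore content in the papers that is irrelevant to the user need."
         )},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=messages,
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```

A utility-oriented evaluation, in the spirit the abstract advocates, would then pose information-seeking questions (e.g., "which paper uses the larger benchmark?") and check whether they can be answered from the generated table alone, rather than scoring text overlap against a reference table.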

@article{wang2025_2504.10284,
  title={Can LLMs Generate Tabular Summaries of Science Papers? Rethinking the Evaluation Protocol},
  author={Weiqi Wang and Jiefu Ou and Yangqiu Song and Benjamin Van Durme and Daniel Khashabi},
  journal={arXiv preprint arXiv:2504.10284},
  year={2025}
}