
On the Robustness of Language Models for Tabular Question Answering

Abstract

Large Language Models (LLMs), which have already been shown to excel at various text comprehension tasks, have also remarkably been shown to tackle table comprehension tasks without task-specific training. While previous research has explored LLM capabilities on tabular datasets, our study assesses the influence of in-context learning, model scale, instruction tuning, and domain biases on Tabular Question Answering (TQA). We evaluate the robustness of LLMs on three TQA datasets: the Wikipedia-based WTQ, the financial-report-based TAT-QA, and the scientific-claims-based SCITAB, focusing on their ability to interpret tabular data robustly under various augmentations and perturbations. Our findings indicate that instructions significantly enhance performance, with recent models exhibiting greater robustness than earlier versions. However, data contamination and practical reliability issues persist, especially with WTQ. We highlight the need for improved methodologies, including structure-aware self-attention mechanisms and better handling of domain-specific tabular data, to develop more reliable LLMs for table comprehension.

@article{bhandari2025_2406.12719,
  title={On the Robustness of Language Models for Tabular Question Answering},
  author={Kushal Raj Bhandari and Sixue Xing and Soham Dan and Jianxi Gao},
  journal={arXiv preprint arXiv:2406.12719},
  year={2025}
}