TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios

We introduce TableLLM, a robust large language model (LLM) with 8 billion parameters, purpose-built for handling tabular data manipulation tasks, whether the tables are embedded in documents or spreadsheets, catering to real-world office scenarios. We propose a distant supervision method for training that comprises a reasoning process extension strategy, which helps LLMs learn reasoning patterns more effectively, and a cross-way validation strategy, which ensures the quality of the automatically generated data. To evaluate TableLLM, we craft benchmarks tailored to both document and spreadsheet formats and construct a well-organized evaluation pipeline capable of handling both scenarios. Thorough evaluations underscore the advantages of TableLLM over various existing general-purpose and tabular data-focused LLMs. We have publicly released the model checkpoint, source code, benchmarks, and a web application for user interaction. Our code and data are publicly available at this https URL.
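The cross-way validation strategy mentioned above can be understood as a consistency filter over automatically generated training samples. Below is a minimal, hypothetical sketch assuming each candidate sample carries answers produced by two independent routes (for example, a textual reasoning answer and a code-execution answer); all field and function names are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch of a cross-way validation filter: keep an
# automatically generated sample only when two independently produced
# answers agree. Field names ("text_answer", "code_answer") are assumptions.

def cross_way_validate(samples):
    """Return only the samples whose two answer routes agree."""
    validated = []
    for sample in samples:
        text_answer = str(sample["text_answer"]).strip().lower()
        code_answer = str(sample["code_answer"]).strip().lower()
        # Normalize before comparing so trivial formatting differences
        # (case, surrounding whitespace) do not reject a valid sample.
        if text_answer == code_answer:
            validated.append(sample)
    return validated

samples = [
    {"text_answer": "42", "code_answer": " 42 "},   # consistent -> kept
    {"text_answer": "7",  "code_answer": "8"},      # inconsistent -> dropped
]
print(len(cross_way_validate(samples)))  # prints 1
```

In practice, the agreement check would likely be more permissive (e.g., numeric tolerance or semantic matching), but the core idea is that agreement between independent generation routes serves as a quality signal for distant supervision.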
@article{zhang2025_2403.19318,
  title={TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios},
  author={Xiaokang Zhang and Sijia Luo and Bohan Zhang and Zeyao Ma and Jing Zhang and Yang Li and Guanlin Li and Zijun Yao and Kangli Xu and Jinchang Zhou and Daniel Zhang-Li and Jifan Yu and Shu Zhao and Juanzi Li and Jie Tang},
  journal={arXiv preprint arXiv:2403.19318},
  year={2025}
}