TableLoRA: Low-rank Adaptation on Table Structure Understanding for Large Language Models

6 March 2025
Xinyi He
Yihao Liu
Mengyu Zhou
Yeye He
Haoyu Dong
Shi Han
Zejian Yuan
Dongmei Zhang
Abstract

Tabular data are crucial in many fields, and their understanding by large language models (LLMs) under a high-parameter-efficiency paradigm is important. However, directly applying parameter-efficient fine-tuning (PEFT) techniques to tabular tasks presents significant challenges, particularly in terms of better table serialization and the representation of two-dimensional structured information within a one-dimensional sequence. To address this, we propose TableLoRA, a module designed to improve LLMs' understanding of table structure during PEFT. It incorporates special tokens for serializing tables with a special token encoder and uses 2D LoRA to encode low-rank information on cell positions. Experiments on four table-related datasets demonstrate that TableLoRA consistently outperforms vanilla LoRA and surpasses various table encoding methods tested in control experiments. These findings show that TableLoRA, as a table-specific LoRA, enhances the ability of LLMs to process tabular data effectively, especially in low-parameter settings, demonstrating its potential as a robust solution for table-related tasks.
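As a rough illustration of the 2D LoRA idea described in the abstract, the sketch below shows a LoRA-style linear layer whose low-rank update is modulated by embeddings of each token's row and column index. This is our own minimal sketch, not the authors' released code: the class name TwoDLoRALinear, the elementwise modulation, and all hyperparameters (rank, max_rows, max_cols) are assumptions for illustration only.

# Hypothetical sketch (not the authors' implementation): a frozen linear layer
# plus a low-rank update conditioned on the 2D cell position of each token.
import torch
import torch.nn as nn

class TwoDLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, max_rows: int = 64, max_cols: int = 64):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen (PEFT)

        in_f, out_f = base.in_features, base.out_features
        self.lora_A = nn.Linear(in_f, rank, bias=False)   # down-projection
        self.lora_B = nn.Linear(rank, out_f, bias=False)  # up-projection
        nn.init.zeros_(self.lora_B.weight)                # update starts as a no-op, as in standard LoRA

        # Low-rank embeddings for the row/column index of each token's cell (assumed design).
        self.row_emb = nn.Embedding(max_rows, rank)
        self.col_emb = nn.Embedding(max_cols, rank)

    def forward(self, x, row_ids, col_ids):
        # x: (batch, seq, in_features); row_ids/col_ids: (batch, seq) cell coordinates
        pos = self.row_emb(row_ids) + self.col_emb(col_ids)   # (batch, seq, rank)
        update = self.lora_B(self.lora_A(x) * pos)            # position-modulated low-rank update
        return self.base(x) + update

# Example usage with dummy cell coordinates:
layer = TwoDLoRALinear(nn.Linear(768, 768), rank=8)
x = torch.randn(2, 10, 768)
row_ids = torch.randint(0, 64, (2, 10))
col_ids = torch.randint(0, 64, (2, 10))
out = layer(x, row_ids, col_ids)  # (2, 10, 768)

In a PEFT setting, only the two low-rank projections and the position embeddings would be trained while the base projection stays frozen; row_ids and col_ids would be produced by the table serialization step that tags each token with its cell coordinates.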

@article{he2025_2503.04396,
  title={TableLoRA: Low-rank Adaptation on Table Structure Understanding for Large Language Models},
  author={Xinyi He and Yihao Liu and Mengyu Zhou and Yeye He and Haoyu Dong and Shi Han and Zejian Yuan and Dongmei Zhang},
  journal={arXiv preprint arXiv:2503.04396},
  year={2025}
}