Table-R1: Inference-Time Scaling for Table Reasoning

29 May 2025
Zheyuan Yang, Lyuhao Chen, Arman Cohan, Yilun Zhao
Abstract

In this work, we present the first study to explore inference-time scaling on table reasoning tasks. We develop and evaluate two post-training strategies to enable inference-time scaling: distillation from frontier model reasoning traces and reinforcement learning with verifiable rewards (RLVR). For distillation, we introduce a large-scale dataset of reasoning traces generated by DeepSeek-R1, which we use to fine-tune LLMs into the Table-R1-SFT model. For RLVR, we propose task-specific verifiable reward functions and apply the GRPO algorithm to obtain the Table-R1-Zero model. We evaluate our Table-R1-series models across diverse table reasoning tasks, including short-form QA, fact verification, and free-form QA. Notably, the Table-R1-Zero model matches or exceeds the performance of GPT-4.1 and DeepSeek-R1, while using only a 7B-parameter LLM. It also demonstrates strong generalization to out-of-domain datasets. Extensive ablation and qualitative analyses reveal the benefits of instruction tuning, model architecture choices, and cross-task generalization, as well as the emergence of essential table reasoning skills during RL training.
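
The RLVR recipe hinges on rewards that can be checked programmatically rather than scored by a judge model. The sketch below illustrates what a task-specific verifiable reward for short-form table QA might look like, using normalized exact match against gold answers; the "Answer:" output convention, the normalization steps, and the function names here are assumptions for illustration, not the paper's actual implementation.

# Minimal sketch of a verifiable reward for short-form table QA.
# Illustrative only: Table-R1 proposes task-specific verifiable rewards,
# but the answer extraction and normalization below are assumptions.
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def short_form_qa_reward(response: str, gold_answers: list[str]) -> float:
    """Return 1.0 if the final 'Answer: ...' line matches any gold answer
    after normalization, else 0.0 (a sparse binary reward)."""
    matches = re.findall(r"answer\s*:\s*(.+)", response, flags=re.IGNORECASE)
    if not matches:
        return 0.0  # unverifiable output earns no reward
    prediction = normalize(matches[-1])  # score the last stated answer
    return 1.0 if any(prediction == normalize(g) for g in gold_answers) else 0.0

Under GRPO, a group of rollouts is sampled per question and each rollout's advantage is its reward relative to the group mean, so even a sparse 0/1 signal like this can drive learning without a separate value model.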

@article{yang2025_2505.23621,
  title={Table-R1: Inference-Time Scaling for Table Reasoning},
  author={Zheyuan Yang and Lyuhao Chen and Arman Cohan and Yilun Zhao},
  journal={arXiv preprint arXiv:2505.23621},
  year={2025}
}