SheetAgent: Towards A Generalist Agent for Spreadsheet Reasoning and Manipulation via Large Language Models

6 March 2024
Yibin Chen
Yifu Yuan
Zeyu Zhang
Yan Zheng
Jinyi Liu
Fei Ni
Jianye Hao
Hangyu Mao
Fuzheng Zhang
Abstract

Spreadsheets are ubiquitous across the World Wide Web, playing a critical role in enhancing work efficiency across various domains. Large language models (LLMs) have recently been applied to automatic spreadsheet manipulation, but they have not yet been investigated in complex, realistic tasks that pose reasoning challenges (e.g., long-horizon manipulation with multi-step reasoning and ambiguous requirements). To bridge the gap with real-world requirements, we introduce SheetRM, a benchmark featuring long-horizon, multi-category tasks whose manipulation depends on reasoning over real-life challenges. To mitigate these challenges, we further propose SheetAgent, a novel autonomous agent that harnesses the power of LLMs. SheetAgent consists of three collaborative modules: Planner, Informer, and Retriever, achieving both advanced reasoning and accurate manipulation over spreadsheets without human interaction, through iterative task reasoning and reflection. Extensive experiments demonstrate that SheetAgent delivers 20-40% pass-rate improvements over baselines on multiple benchmarks, achieving enhanced precision in spreadsheet manipulation and demonstrating superior table reasoning abilities. More details and visualizations are available at the project website: this https URL. The datasets and source code are available at this https URL.
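The abstract describes a loop in which three modules cooperate: a Planner proposes manipulation steps, an Informer supplies task-relevant views of the spreadsheet, and a Retriever surfaces helpful examples when reflection on a failed step is needed. A minimal sketch of such a loop is shown below; all class names, method signatures, and the stub executor are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a Planner-Informer-Retriever agent loop.
# Every name here is an assumption for illustration; the paper's real
# modules are LLM-driven and operate on actual spreadsheet files.

class Planner:
    """Proposes the next manipulation step from the task, context, and feedback."""
    def propose(self, task, context, feedback):
        # A real Planner would prompt an LLM; this stub echoes the task.
        return f"step for: {task}"

class Informer:
    """Provides a task-relevant sub-view of the spreadsheet to the Planner."""
    def inform(self, spreadsheet, task):
        # Stub: expose only the column names as context.
        return {"columns": list(spreadsheet.keys())}

class Retriever:
    """Retrieves similar examples to aid reflection after a failed step."""
    def retrieve(self, error):
        return []  # Stub: no exemplar store in this sketch.

def execute(action, spreadsheet):
    """Stub executor: pretends every action succeeds."""
    return True, None

def run_agent(task, spreadsheet, max_iters=3):
    """Iterate plan -> execute -> reflect until success or budget exhausted."""
    planner, informer, retriever = Planner(), Informer(), Retriever()
    feedback, history = None, []
    for _ in range(max_iters):
        context = informer.inform(spreadsheet, task)
        action = planner.propose(task, context, feedback)
        history.append(action)
        ok, error = execute(action, spreadsheet)
        if ok:
            break
        # On failure, enrich feedback with retrieved exemplars for reflection.
        feedback = (error, retriever.retrieve(error))
    return history
```

With the stub executor, a task resolves in a single iteration; the point of the sketch is the control flow, i.e., how reflection feedback and retrieved exemplars would re-enter the Planner on failure.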

View on arXiv
@article{chen2025_2403.03636,
  title={SheetAgent: Towards A Generalist Agent for Spreadsheet Reasoning and Manipulation via Large Language Models},
  author={Yibin Chen and Yifu Yuan and Zeyu Zhang and Yan Zheng and Jinyi Liu and Fei Ni and Jianye Hao and Hangyu Mao and Fuzheng Zhang},
  journal={arXiv preprint arXiv:2403.03636},
  year={2025}
}