Pandora: A Code-Driven Large Language Model Agent for Unified Reasoning Across Diverse Structured Knowledge

17 April 2025
Yongrui Chen
Junhao He
Linbo Fu
Shenyu Zhang
Rihui Jin
Xinbang Dai
Jiaqi Li
Dehai Min
Nan Hu
Yuxin Zhang
Guilin Qi
Yi Huang
Tongtong Wu
ArXiv | PDF | HTML
Abstract

Unified Structured Knowledge Reasoning (USKR) aims to answer natural language questions (NLQs) by drawing on structured sources such as tables, databases, and knowledge graphs in a unified way. Existing USKR methods rely either on task-specific strategies or on custom-defined representations, which struggle to exploit knowledge transfer between different SKR tasks or to align with the priors of LLMs, limiting their performance. This paper proposes a novel USKR framework named Pandora, which leverages Python's Pandas API to construct a unified knowledge representation aligned with LLM pre-training. It employs an LLM to generate textual reasoning steps and executable Python code for each question. Demonstrations are drawn from a memory of training examples covering various SKR tasks, facilitating knowledge transfer. Extensive experiments on four benchmarks spanning three SKR tasks demonstrate that Pandora outperforms existing unified frameworks and competes effectively with task-specific methods.
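To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of answering an NLQ by representing structured knowledge as a Pandas DataFrame and executing code against it. In Pandora the code would be produced by an LLM conditioned on the question, the DataFrame schema, and retrieved demonstrations; here the generated snippet is hard-coded, and all names are hypothetical.

    # Sketch: a toy table standing in for a table / database relation / KG projection,
    # represented as a Pandas DataFrame (the unified knowledge representation).
    import pandas as pd

    table = pd.DataFrame({
        "city": ["Paris", "Berlin", "Madrid"],
        "country": ["France", "Germany", "Spain"],
        "population_millions": [2.1, 3.6, 3.3],
    })

    question = "Which city has the largest population?"

    # Stand-in for LLM output: textual reasoning steps would accompany this code.
    generated_code = "answer = table.loc[table['population_millions'].idxmax(), 'city']"

    # Execute the generated Pandas code against the DataFrame and read the answer.
    namespace = {"table": table}
    exec(generated_code, namespace)
    print(namespace["answer"])  # -> "Berlin"

Because every source type is mapped onto the same DataFrame abstraction, demonstrations from one SKR task can be reused as in-context examples for another, which is the knowledge-transfer effect the abstract refers to.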

View on arXiv
@article{chen2025_2504.12734,
  title={Pandora: A Code-Driven Large Language Model Agent for Unified Reasoning Across Diverse Structured Knowledge},
  author={Yongrui Chen and Junhao He and Linbo Fu and Shenyu Zhang and Rihui Jin and Xinbang Dai and Jiaqi Li and Dehai Min and Nan Hu and Yuxin Zhang and Guilin Qi and Yi Huang and Tongtong Wu},
  journal={arXiv preprint arXiv:2504.12734},
  year={2025}
}