ResearchTrend.AI
DatawiseAgent: A Notebook-Centric LLM Agent Framework for Automated Data Science

10 March 2025
Ziming You
Yumiao Zhang
Dexuan Xu
Yiwei Lou
Yandong Yan
Wei Wang
Huaming Zhang
Yu Huang
    LLMAG
Abstract

Data science tasks are multifaceted, dynamic, and often domain-specific. Existing LLM-based approaches largely concentrate on isolated phases, neglecting the interdependent nature of many data science tasks and limiting their capacity for comprehensive end-to-end support. We propose DatawiseAgent, a notebook-centric LLM agent framework that unifies interactions among the user, the agent, and the computational environment through markdown and executable code cells, supporting flexible and adaptive automated data science. Built on a Finite State Transducer (FST), DatawiseAgent orchestrates four stages: DFS-like planning, incremental execution, self-debugging, and post-filtering. Specifically, the DFS-like planning stage systematically explores the solution space, while incremental execution harnesses real-time feedback and accommodates LLMs' limited capabilities to progressively complete tasks. The self-debugging and post-filtering modules further enhance reliability by diagnosing and correcting errors and pruning extraneous information. Extensive experiments on diverse tasks, including data analysis, visualization, and data modeling, show that DatawiseAgent consistently outperforms or matches state-of-the-art methods across multiple model settings. These results highlight its potential to generalize across data science scenarios and lay the groundwork for more efficient, fully automated workflows.
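The four-stage loop described above can be pictured as a small state machine: the agent plans, executes notebook cells incrementally, drops into self-debugging on errors, and filters the final notebook before finishing. The sketch below is a minimal illustration of that control flow, not the authors' implementation; the stage names, event strings, and transition table are all assumptions for exposition.

```python
from enum import Enum, auto

class Stage(Enum):
    """Hypothetical stage labels mirroring the paper's four-stage design."""
    PLANNING = auto()    # DFS-like exploration of the solution space
    EXECUTION = auto()   # incremental, cell-by-cell execution with feedback
    DEBUGGING = auto()   # self-debugging on runtime errors
    FILTERING = auto()   # post-filtering to prune extraneous cells
    DONE = auto()

def next_stage(stage: Stage, event: str) -> Stage:
    """Transition function of the sketched FST (event strings are illustrative)."""
    table = {
        (Stage.PLANNING, "plan_ready"): Stage.EXECUTION,
        (Stage.EXECUTION, "cell_ok"): Stage.EXECUTION,     # keep executing cells
        (Stage.EXECUTION, "cell_error"): Stage.DEBUGGING,  # error triggers self-debugging
        (Stage.EXECUTION, "task_done"): Stage.FILTERING,
        (Stage.DEBUGGING, "fix_ok"): Stage.EXECUTION,      # resume after a successful fix
        (Stage.DEBUGGING, "give_up"): Stage.PLANNING,      # backtrack, DFS-style
        (Stage.FILTERING, "filtered"): Stage.DONE,
    }
    return table[(stage, event)]

# Walk one plausible trajectory through the stages.
events = ["plan_ready", "cell_ok", "cell_error", "fix_ok", "task_done", "filtered"]
stage = Stage.PLANNING
for event in events:
    stage = next_stage(stage, event)
print(stage)  # Stage.DONE
```

A real FST would also emit outputs (markdown and code cells) on each transition; here only the state component is shown to keep the control flow visible.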

@article{you2025_2503.07044,
  title={DatawiseAgent: A Notebook-Centric LLM Agent Framework for Automated Data Science},
  author={Ziming You and Yumiao Zhang and Dexuan Xu and Yiwei Lou and Yandong Yan and Wei Wang and Huaming Zhang and Yu Huang},
  journal={arXiv preprint arXiv:2503.07044},
  year={2025}
}