
Self-Corrective Task Planning by Inverse Prompting with Large Language Models

Abstract

In robot task planning, large language models (LLMs) have shown significant promise in generating complex and long-horizon action sequences. However, LLMs often produce responses that sound plausible but are not accurate. To address this problem, existing methods typically employ predefined error sets or external knowledge sources, which require human effort and computational resources. Recently, self-correction approaches have emerged, in which the LLM generates and refines plans, identifying errors by itself. Despite their effectiveness, they are prone to correction failures due to insufficient reasoning. In this paper, we introduce InversePrompt, a novel self-corrective task planning approach that leverages inverse prompting to enhance interpretability. Our method incorporates reasoning steps to provide clear, interpretable feedback: it generates inverse actions corresponding to the initially generated actions and verifies whether these inverse actions can restore the system to its original state, explicitly validating the logical coherence of the generated plans. Experimental results on benchmark datasets show an average 16.3% higher success rate over existing LLM-based task planning methods. Our approach offers clearer justifications for feedback in real-world environments, resulting in more successful task completion than existing self-correction approaches across various scenarios.
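The verification idea in the abstract can be sketched in miniature. The snippet below is a minimal illustration, not the paper's implementation: it uses a toy symbolic state instead of a real LLM or robot, and the `inverse_of` function is a hypothetical stand-in for prompting the LLM to generate an inverse action.

```python
def apply(state, action):
    """Apply a (verb, obj, place) action to a dict mapping objects to locations."""
    verb, obj, place = action
    new_state = dict(state)
    if verb == "move":
        new_state[obj] = place
    return new_state

def inverse_of(state, action):
    """Hypothetical stand-in for prompting the LLM for an inverse action:
    a 'move' is undone by moving the object back to where it currently is."""
    verb, obj, _ = action
    return (verb, obj, state[obj])

def verify_plan(state, plan):
    """Check each action by applying it and its inverse; the plan is judged
    logically coherent only if the inverse restores the prior state."""
    for action in plan:
        inv = inverse_of(state, action)
        roundtrip = apply(apply(state, action), inv)
        if roundtrip != state:
            return False  # feedback: this action is not coherently reversible
        state = apply(state, action)
    return True

state = {"cup": "table", "book": "shelf"}
plan = [("move", "cup", "sink"), ("move", "book", "table")]
print(verify_plan(state, plan))  # True for this consistent toy plan
```

In the paper's actual setting, both the forward plan and the inverse actions come from the LLM, and a failed round trip is turned into interpretable feedback for the self-correction loop.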

@article{lee2025_2503.07317,
  title={Self-Corrective Task Planning by Inverse Prompting with Large Language Models},
  author={Jiho Lee and Hayun Lee and Jonghyeon Kim and Kyungjae Lee and Eunwoo Kim},
  journal={arXiv preprint arXiv:2503.07317},
  year={2025}
}