
Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity

Abstract

Large language models (LLMs) exhibit advanced reasoning skills, enabling robots to comprehend natural language instructions and strategically plan high-level actions through proper grounding. However, LLM hallucination may result in robots confidently executing plans that are misaligned with user goals or even unsafe in critical scenarios. Additionally, inherent ambiguity in natural language instructions can introduce uncertainty into the LLM's reasoning and planning. We propose introspective planning, a systematic approach that aligns the LLM's uncertainty with the inherent ambiguity of the task. Our approach constructs a knowledge base containing introspective reasoning examples as post-hoc rationalizations of human-selected safe and compliant plans, which are retrieved during deployment. Evaluations on three tasks, including a newly introduced safe mobile manipulation benchmark, demonstrate that introspection substantially improves both compliance and safety over state-of-the-art LLM-based planning methods. Furthermore, we empirically show that introspective planning, in combination with conformal prediction, achieves tighter confidence bounds, maintaining statistical success guarantees while minimizing unnecessary user clarification requests. The webpage and code are accessible at this https URL.
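
As a rough illustration of the conformal prediction component mentioned in the abstract, the Python sketch below shows how a split-conformal threshold can be used to keep a set of candidate plans and request user clarification only when that set is ambiguous. This is not the authors' code; the function names, scores, and calibration data are hypothetical placeholders, and the scoring of candidate plans by the LLM is assumed to happen elsewhere.

import numpy as np

def calibrate_threshold(cal_scores_true, alpha=0.1):
    """Split-conformal calibration.

    cal_scores_true[i] is the model's score for the ground-truth option
    of calibration example i; alpha is the target miscoverage rate.
    """
    n = len(cal_scores_true)
    # Nonconformity score = 1 - score assigned to the correct option.
    nonconformity = 1.0 - np.asarray(cal_scores_true, dtype=float)
    # Finite-sample-adjusted (1 - alpha) quantile of the nonconformity scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(nonconformity, q_level, method="higher")

def prediction_set(option_scores, qhat):
    """Keep every candidate plan whose nonconformity falls below the threshold."""
    return [i for i, s in enumerate(option_scores) if 1.0 - s <= qhat]

# Hypothetical usage: clarify only when more than one plan survives.
cal_scores_true = [0.92, 0.85, 0.97, 0.88, 0.90]   # placeholder calibration scores
qhat = calibrate_threshold(cal_scores_true, alpha=0.1)
options = ["pick up the red mug", "pick up the blue mug"]
scores = [0.55, 0.52]                               # placeholder LLM option scores
keep = prediction_set(scores, qhat)
if len(keep) != 1:
    print("Ambiguous instruction; asking the user to choose among:",
          [options[i] for i in keep])
else:
    print("Executing:", options[keep[0]])

The key property this sketch relies on is that the calibrated threshold yields a prediction set containing the correct plan with probability at least 1 - alpha; the abstract's claim is that introspective reasoning tightens these sets, so clarification is requested less often.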

@article{liang2025_2402.06529,
  title={Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity},
  author={Kaiqu Liang and Zixu Zhang and Jaime Fernández Fisac},
  journal={arXiv preprint arXiv:2402.06529},
  year={2025}
}