
Effective Explanations for Belief-Desire-Intention Robots: When and What to Explain

2 July 2025
Cong Wang
Roberto Calandra
Verena Klös
Main: 5 pages · Bibliography: 1 page · 3 figures · 2 tables
Abstract

When robots perform complex, context-dependent tasks in our daily lives, deviations from expectations can confuse users. Explanations of the robot's reasoning process can help users understand the robot's intentions. However, when explanations are provided and what they contain matter for avoiding user annoyance. We have investigated user preferences for explanation demand and content for a robot that helps with daily cleaning tasks in a kitchen. Our results show that users want explanations in surprising situations and prefer concise explanations that clearly state the intention behind the confusing action and the contextual factors that were relevant to this decision. Based on these findings, we propose two algorithms: one to identify surprising actions and one to construct effective explanations for Belief-Desire-Intention (BDI) robots. Our algorithms can be easily integrated into the BDI reasoning process and pave the way for better human-robot interaction with context- and user-specific explanations.
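The abstract only sketches the two algorithms at a high level. As a purely illustrative, hypothetical sketch (the class, field names, and logic below are assumptions, not taken from the paper), a surprise check plus a concise intention-and-context explanation for a BDI-style agent might look like:

```python
from dataclasses import dataclass, field


@dataclass
class BDIAgent:
    """Toy BDI-style agent: beliefs about the world, a current
    intention, and the set of actions the user expects next."""
    beliefs: dict = field(default_factory=dict)
    intention: str = ""
    expected_actions: set = field(default_factory=set)

    def is_surprising(self, action: str) -> bool:
        # Flag an action as surprising when it deviates from what
        # the user expects (only then is an explanation wanted).
        return action not in self.expected_actions

    def explain(self, action: str, relevant_beliefs: list) -> str:
        # Concise explanation: the intention behind the confusing
        # action plus the contextual beliefs relevant to the decision.
        context = ", ".join(f"{b}={self.beliefs[b]}" for b in relevant_beliefs)
        return f"I chose '{action}' to {self.intention} (context: {context})."


agent = BDIAgent(
    beliefs={"stove": "on", "pan": "dirty"},
    intention="keep the kitchen safe",
    expected_actions={"wash pan"},
)
action = "turn off stove"
if agent.is_surprising(action):
    print(agent.explain(action, ["stove"]))
```

In this toy setup the robot was expected to wash the pan, so turning off the stove triggers an explanation that names the intention and the belief (`stove=on`) behind it.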

@article{wang2025_2507.02016,
  title={Effective Explanations for Belief-Desire-Intention Robots: When and What to Explain},
  author={Cong Wang and Roberto Calandra and Verena Klös},
  journal={arXiv preprint arXiv:2507.02016},
  year={2025}
}