Enhancing Explainability with Multimodal Context Representations for Smarter Robots

Artificial Intelligence (AI) has advanced significantly in recent years, driving innovation across many fields, especially robotics. Although robots can perform complex tasks with increasing autonomy, challenges remain in ensuring explainability and user-centered design for effective interaction. A key issue in Human-Robot Interaction (HRI) is enabling robots to perceive and reason over multimodal inputs, such as audio and vision, in order to foster trust and seamless collaboration. In this paper, we propose a generalized and explainable multimodal framework for context representation, designed to improve the fusion of the speech and vision modalities. We introduce a use case on assessing 'Relevance' between a user's verbal utterances and the robot's visual scene perception. We present our methodology with a Multimodal Joint Representation module and a Temporal Alignment module, which allow robots to evaluate relevance by temporally aligning multimodal inputs. Finally, we discuss how the proposed framework for context representation can support various aspects of explainability in HRI.
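To make the abstract's two modules concrete, the following is a minimal, hypothetical sketch of how temporal alignment and joint representation could be combined to produce a relevance score between speech and vision streams. All function names, the nearest-timestamp alignment strategy, the averaging-based fusion, and the cosine-similarity relevance measure are illustrative assumptions, not the implementation described in the paper.

import numpy as np

def temporal_alignment(speech_feats, speech_ts, vision_feats, vision_ts):
    """Pair each speech frame with the temporally closest vision frame
    (hypothetical nearest-timestamp strategy; the paper's Temporal
    Alignment module may differ)."""
    idx = np.abs(vision_ts[None, :] - speech_ts[:, None]).argmin(axis=1)
    return speech_feats, vision_feats[idx]

def joint_representation(speech_feats, vision_feats, w_s, w_v):
    """Project both modalities into a shared space and fuse by averaging
    (one of many possible fusion schemes)."""
    s = speech_feats @ w_s          # (T, d_joint)
    v = vision_feats @ w_v          # (T, d_joint)
    return (s + v) / 2.0

def relevance_score(speech_proj, vision_proj):
    """Mean cosine similarity between aligned, projected modalities,
    used here as a stand-in 'Relevance' measure."""
    s = speech_proj / np.linalg.norm(speech_proj, axis=1, keepdims=True)
    v = vision_proj / np.linalg.norm(vision_proj, axis=1, keepdims=True)
    return float((s * v).sum(axis=1).mean())

# Toy usage with random features and timestamps.
rng = np.random.default_rng(0)
speech = rng.normal(size=(20, 128)); speech_t = np.linspace(0, 5, 20)
vision = rng.normal(size=(50, 256)); vision_t = np.linspace(0, 5, 50)
w_s = rng.normal(size=(128, 64));    w_v = rng.normal(size=(256, 64))

s_aligned, v_aligned = temporal_alignment(speech, speech_t, vision, vision_t)
joint = joint_representation(s_aligned, v_aligned, w_s, w_v)
score = relevance_score(s_aligned @ w_s, v_aligned @ w_v)
print(f"joint representation shape: {joint.shape}, relevance ~ {score:.3f}")

In practice the projections would be learned and the fusion would likely be more expressive than averaging; the sketch only shows how alignment must precede fusion so that relevance is computed over temporally corresponding speech and vision inputs.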
@article{viswanath2025_2503.16467,
  title   = {Enhancing Explainability with Multimodal Context Representations for Smarter Robots},
  author  = {Anargh Viswanath and Lokesh Veeramacheneni and Hendrik Buschmeier},
  journal = {arXiv preprint arXiv:2503.16467},
  year    = {2025}
}