
A Multi-Modal Interaction Framework for Efficient Human-Robot Collaborative Shelf Picking

Abstract

The growing presence of service robots in human-centric environments, such as warehouses, demands seamless and intuitive human-robot collaboration. In this paper, we propose a collaborative shelf-picking framework that combines multimodal interaction, physics-based reasoning, and task division for enhanced human-robot teamwork. The framework enables the robot to recognize human pointing gestures, interpret verbal cues and voice commands, and communicate through visual and auditory feedback. Moreover, it is powered by a Large Language Model (LLM) that employs Chain-of-Thought (CoT) reasoning, a physics-based simulation engine for safely retrieving boxes from cluttered stacks on shelves, and a relationship graph for sub-task generation, extraction-sequence planning, and decision making. Furthermore, we validate the framework through real-world shelf-picking experiments: 1) Gesture-Guided Box Extraction, 2) Collaborative Shelf Clearing, and 3) Collaborative Stability Assistance.
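
The paper does not publish code, but the extraction-sequence planning it describes can be illustrated with a minimal sketch. Assuming a relationship graph in which supports[a] holds the boxes resting directly on box a (the names plan_extraction and supports are hypothetical, not from the paper), a post-order traversal yields an order that clears everything stacked above the target before the target itself:

# Minimal sketch, not the authors' implementation: plans a safe removal
# order over a hypothetical support-relationship graph of shelved boxes.
def plan_extraction(supports: dict[str, set[str]], target: str) -> list[str]:
    """Return a removal order that clears every box resting (transitively)
    on `target` before `target` itself, keeping the stack stable.

    `supports[a]` is the set of boxes resting directly on box `a`.
    """
    order: list[str] = []
    visited: set[str] = set()

    def remove(box: str) -> None:
        if box in visited:
            return
        visited.add(box)
        for upper in supports.get(box, set()):  # boxes stacked on `box`
            remove(upper)                       # clear them first
        order.append(box)                       # now `box` can come out

    remove(target)
    return order

# Example: B and C rest on A; D rests on B. Extracting A clears D, B, C first.
supports = {"A": {"B", "C"}, "B": {"D"}, "C": set(), "D": set()}
print(plan_extraction(supports, "A"))  # e.g. ['D', 'B', 'C', 'A']

In the paper's framework this ordering step would sit downstream of the LLM's sub-task generation and be checked against the physics-based simulation before execution; the sketch covers only the graph-traversal idea.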

@article{pathak2025_2504.06593,
  title={A Multi-Modal Interaction Framework for Efficient Human-Robot Collaborative Shelf Picking},
  author={Abhinav Pathak and Kalaichelvi Venkatesan and Tarek Taha and Rajkumar Muthusamy},
  journal={arXiv preprint arXiv:2504.06593},
  year={2025}
}