Dexterous Manipulation through Imitation Learning: A Survey

Dexterous manipulation refers to the ability of a robotic hand or multi-fingered end-effector to skillfully control, reorient, and manipulate objects through precise, coordinated finger movements and adaptive force modulation, enabling complex interactions comparable to human hand dexterity. With recent advances in robotics and machine learning, there is a growing demand for such systems to operate in complex and unstructured environments. Traditional model-based approaches struggle to generalize across tasks and object variations due to the high dimensionality and complex contact dynamics of dexterous manipulation. Although model-free methods such as reinforcement learning (RL) show promise, they require extensive training, large-scale interaction data, and carefully designed rewards for stability and effectiveness. Imitation learning (IL) offers an alternative by allowing robots to acquire dexterous manipulation skills directly from expert demonstrations, capturing fine-grained coordination and contact dynamics while bypassing the need for explicit modeling and large-scale trial and error. This survey provides an overview of dexterous manipulation methods based on imitation learning, details recent advances, and addresses key challenges in the field. It also explores potential research directions for enhancing IL-driven dexterous manipulation. Our goal is to offer researchers and practitioners a comprehensive introduction to this rapidly evolving domain.
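
As a concrete illustration of the IL paradigm surveyed here, the sketch below shows behavior cloning, the simplest form of imitation learning, in which a policy network is regressed onto expert state-action pairs. The state and action dimensions, network architecture, and synthetic demonstration data are illustrative assumptions for this sketch, not the survey's own method.

# Minimal behavior-cloning sketch (illustrative only; not from the survey).
# A policy is fit by supervised regression on expert (state, action) pairs,
# which is the simplest instance of imitation learning discussed in the text.
import torch
import torch.nn as nn

STATE_DIM = 48    # e.g., hand joint angles + object pose (assumed dimensions)
ACTION_DIM = 24   # e.g., target joint positions of a multi-fingered hand (assumed)

# Placeholder "expert demonstrations": random tensors standing in for real
# teleoperated or human-video manipulation trajectories.
demo_states = torch.randn(1024, STATE_DIM)
demo_actions = torch.randn(1024, ACTION_DIM)

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(100):
    pred_actions = policy(demo_states)
    loss = nn.functional.mse_loss(pred_actions, demo_actions)  # imitate the expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, the learned policy maps observed states to hand actions
# without an explicit dynamics model or a hand-designed reward.

Real systems replace the synthetic data with teleoperated or human-video demonstrations and typically use more expressive policy classes, but the underlying supervised objective is the same.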
@article{an2025_2504.03515,
  title   = {Dexterous Manipulation through Imitation Learning: A Survey},
  author  = {Shan An and Ziyu Meng and Chao Tang and Yuning Zhou and Tengyu Liu and Fangqiang Ding and Shufang Zhang and Yao Mu and Ran Song and Wei Zhang and Zeng-Guang Hou and Hong Zhang},
  journal = {arXiv preprint arXiv:2504.03515},
  year    = {2025}
}