RoboHanger: Learning Generalizable Robotic Hanger Insertion for Diverse Garments

For the task of hanging clothes, learning how to insert a hanger into a garment is a crucial step that has rarely been explored in robotics. In this work, we address the problem of inserting a hanger into various unseen garments that are initially laid flat on a table. This task is challenging due to its long-horizon nature, the high degrees of freedom of the garments, and the scarcity of training data. To simplify the learning process, we first propose breaking the task into several subtasks. We then formulate each subtask as a policy learning problem and propose a low-dimensional action parameterization. To overcome the challenge of limited data, we build our own simulator and create 144 synthetic clothing assets to collect high-quality training data efficiently. Our approach uses single-view depth images and object masks as input, which mitigates the Sim2Real appearance gap and generalizes well to new garments. Extensive experiments in both simulation and the real world validate the proposed method. By training on various garments in the simulator, our method achieves a 75% success rate on 8 different unseen garments in the real world.
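A minimal sketch of the observation format the abstract describes: a single-view depth image stacked with object masks, rather than RGB, so that simulated and real inputs look alike. The function name, channel layout, and the assumption of separate garment and hanger masks are illustrative, not taken from the paper.

```python
import numpy as np

def build_policy_input(depth: np.ndarray,
                       garment_mask: np.ndarray,
                       hanger_mask: np.ndarray,
                       depth_clip: float = 2.0) -> np.ndarray:
    """Stack a clipped, normalized depth map with binary object masks
    into a (3, H, W) observation for a policy network.

    Depth and segmentation masks carry little texture or lighting
    information, which is why this input choice narrows the Sim2Real
    appearance gap. Shapes and the clip range are assumptions.
    """
    depth = np.clip(depth, 0.0, depth_clip) / depth_clip  # normalize to [0, 1]
    return np.stack([depth,
                     garment_mask.astype(np.float32),
                     hanger_mask.astype(np.float32)], axis=0)

# Usage with a dummy 128x128 frame.
h, w = 128, 128
obs = build_policy_input(np.random.rand(h, w) * 2.0,
                         np.zeros((h, w), dtype=bool),
                         np.zeros((h, w), dtype=bool))
assert obs.shape == (3, h, w)
```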
@article{chen2025_2412.01083,
  title   = {RoboHanger: Learning Generalizable Robotic Hanger Insertion for Diverse Garments},
  author  = {Yuxing Chen and Songlin Wei and Bowen Xiao and Jiangran Lyu and Jiayi Chen and Feng Zhu and He Wang},
  journal = {arXiv preprint arXiv:2412.01083},
  year    = {2025}
}