Learn to Teach: Sample-Efficient Privileged Learning for Humanoid Locomotion over Diverse Terrains

Humanoid robots promise transformative capabilities for industrial and service applications. While recent advances in Reinforcement Learning (RL) yield impressive results in locomotion, manipulation, and navigation, existing methods typically require enormous numbers of simulation samples to account for real-world variability. This work proposes a novel one-stage training framework, Learn to Teach (L2T), which unifies teacher and student policy learning. Our approach recycles simulator samples and synchronizes the learning trajectories of the teacher and student through shared dynamics, significantly reducing sample complexity and training time while achieving state-of-the-art performance. Furthermore, we validate the RL variant (L2T-RL) through extensive simulations and hardware tests on the Digit robot, demonstrating zero-shot sim-to-real transfer and robust performance over 12+ challenging terrains without depth estimation modules.
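To make the one-stage idea concrete, the sketch below shows how a single batch of simulator samples can drive both a teacher update (with privileged observations) and a student imitation update in the same optimizer step. This is a minimal illustration under assumed components, not the L2T-RL implementation: the toy environment (ToyEnv), the fixed-variance Gaussian teacher policy with a REINFORCE-style surrogate, and the MSE imitation loss are all placeholders introduced here for clarity.

```python
# Illustrative sketch of one-stage teacher-student training on shared samples.
# All names (ToyEnv, mlp, loss choices) are assumptions for this example only.
import torch
import torch.nn as nn

class ToyEnv:
    """Stand-in simulator: privileged state (e.g., terrain parameters) plus proprioception."""
    def __init__(self, priv_dim=8, prop_dim=16, act_dim=4):
        self.priv_dim, self.prop_dim, self.act_dim = priv_dim, prop_dim, act_dim

    def rollout(self, steps=64):
        priv = torch.randn(steps, self.priv_dim)   # privileged observations
        prop = torch.randn(steps, self.prop_dim)   # proprioceptive observations
        rew = torch.randn(steps)                   # placeholder rewards
        return priv, prop, rew

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))

env = ToyEnv()
teacher = mlp(env.priv_dim + env.prop_dim, env.act_dim)  # sees privileged info
student = mlp(env.prop_dim, env.act_dim)                 # proprioception only
opt = torch.optim.Adam(list(teacher.parameters()) + list(student.parameters()), lr=3e-4)

for it in range(100):
    # One batch of simulator samples is reused for both policies.
    priv, prop, rew = env.rollout()
    teacher_mean = teacher(torch.cat([priv, prop], dim=-1))
    dist = torch.distributions.Normal(teacher_mean, 1.0)
    actions = dist.sample()

    # Placeholder REINFORCE-style surrogate for the teacher's RL objective.
    rl_loss = -(dist.log_prob(actions).sum(-1) * rew).mean()

    # Imitation loss ties the student to the teacher on the same samples,
    # so both policies advance along a shared learning trajectory.
    imit_loss = (student(prop) - teacher_mean.detach()).pow(2).mean()

    loss = rl_loss + imit_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design point this sketch tries to convey is that no separate distillation stage is run: the teacher's RL loss and the student's imitation loss are optimized jointly from the same rollout, which is where the sample-efficiency gain of a unified one-stage scheme would come from.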
@article{wu2025_2402.06783,
  title   = {Learn to Teach: Sample-Efficient Privileged Learning for Humanoid Locomotion over Diverse Terrains},
  author  = {Feiyang Wu and Xavier Nal and Jaehwi Jang and Wei Zhu and Zhaoyuan Gu and Anqi Wu and Ye Zhao},
  journal = {arXiv preprint arXiv:2402.06783},
  year    = {2025}
}