MoE-Loco: Mixture of Experts for Multitask Locomotion
Abstract
We present MoE-Loco, a Mixture of Experts (MoE) framework for multitask legged-robot locomotion. Our method enables a single policy to handle diverse terrains, including bars, pits, stairs, slopes, and baffles, while supporting both quadrupedal and bipedal gaits. Using MoE, we mitigate the gradient conflicts that typically arise in multitask reinforcement learning, improving both training efficiency and performance. Our experiments demonstrate that different experts naturally specialize in distinct locomotion behaviors, which can be leveraged for task migration and skill composition. We further validate our approach in both simulation and real-world deployment, showcasing its robustness and adaptability.
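To make the MoE idea concrete, below is a minimal sketch of a soft Mixture-of-Experts policy head in PyTorch. It is not the paper's implementation; all names and sizes (NUM_EXPERTS, OBS_DIM, ACT_DIM, MoEPolicy) are illustrative assumptions. The key point is that a gating network softly weights expert outputs, so each expert receives gradient only in proportion to its gate weight, which is one way an MoE can reduce cross-task gradient conflict.

```python
# Illustrative soft-MoE policy head; a sketch, not the authors' code.
import torch
import torch.nn as nn

NUM_EXPERTS = 4   # hypothetical number of locomotion experts
OBS_DIM = 48      # hypothetical proprioceptive observation size
ACT_DIM = 12      # hypothetical joint-action size

class MoEPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Each expert is a small MLP mapping observations to actions.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ELU(),
                          nn.Linear(128, ACT_DIM))
            for _ in range(NUM_EXPERTS)
        )
        # The gate produces per-expert mixing weights from the observation.
        self.gate = nn.Sequential(nn.Linear(OBS_DIM, NUM_EXPERTS),
                                  nn.Softmax(dim=-1))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        weights = self.gate(obs)                                   # (B, E)
        outs = torch.stack([e(obs) for e in self.experts], dim=1)  # (B, E, A)
        # Soft combination: gradients flow to each expert scaled by
        # its gate weight, letting experts specialize per task.
        return (weights.unsqueeze(-1) * outs).sum(dim=1)           # (B, A)

policy = MoEPolicy()
actions = policy(torch.randn(8, OBS_DIM))  # batch of 8 observations
print(actions.shape)  # torch.Size([8, 12])
```

Under this kind of gating, inspecting which experts dominate on each terrain is also what makes the specialization analysis and skill composition described in the abstract possible.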
@article{huang2025_2503.08564,
  title   = {MoE-Loco: Mixture of Experts for Multitask Locomotion},
  author  = {Runhan Huang and Shaoting Zhu and Yilun Du and Hang Zhao},
  journal = {arXiv preprint arXiv:2503.08564},
  year    = {2025}
}