MULE: Multi-terrain and Unknown Load Adaptation for Effective Quadrupedal Locomotion

Quadrupedal robots are increasingly deployed for load-carrying tasks across diverse terrains. While Model Predictive Control (MPC)-based methods can account for payload variations, they often depend on predefined gait schedules or trajectory generators, limiting their adaptability in unstructured environments. To address these limitations, we propose an Adaptive Reinforcement Learning (RL) framework that enables quadrupedal robots to dynamically adapt to both varying payloads and diverse terrains. The framework consists of a nominal policy responsible for baseline locomotion and an adaptive policy that learns corrective actions to preserve stability and improve command tracking under payload variations. We validate the proposed approach through large-scale simulation experiments in Isaac Gym and real-world hardware deployment on a Unitree Go1 quadruped. The controller was tested on flat ground, slopes, and stairs under both static and dynamic payload changes. Across all settings, our adaptive controller consistently outperformed the nominal controller in tracking body height and velocity commands, demonstrating enhanced robustness and adaptability without requiring explicit gait design or manual tuning.
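
The abstract does not give implementation details, but a nominal-plus-adaptive policy pair of the kind described is often realized as a residual composition, where the adaptive policy outputs a correction added to the nominal action. The sketch below is a minimal illustration under that assumption only; the observation/action dimensions (48 and 12, loosely matching a Go1-style quadruped), the MLP architecture, and the additive correction are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class MLPPolicy(nn.Module):
    """Simple MLP actor; stands in for both the nominal and adaptive policies."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Hypothetical dimensions: 48-D proprioceptive observation, 12 joint-position targets.
obs_dim, act_dim = 48, 12
nominal_policy = MLPPolicy(obs_dim, act_dim)   # baseline locomotion policy
adaptive_policy = MLPPolicy(obs_dim, act_dim)  # learns corrective actions under payload changes

obs = torch.randn(1, obs_dim)  # placeholder observation
with torch.no_grad():
    base_action = nominal_policy(obs)
    correction = adaptive_policy(obs)
# Assumed residual composition: the corrective term is added on top of the baseline action.
action = base_action + correction
```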
@article{kurva2025_2505.00488,
  title={MULE: Multi-terrain and Unknown Load Adaptation for Effective Quadrupedal Locomotion},
  author={Vamshi Kumar Kurva and Shishir Kolathaya},
  journal={arXiv preprint arXiv:2505.00488},
  year={2025}
}