Motion Priors Reimagined: Adapting Flat-Terrain Skills for Complex Quadruped Mobility

21 May 2025
Zewei Zhang, Chenhao Li, Takahiro Miki, Marco Hutter
Abstract

Reinforcement learning (RL)-based legged locomotion controllers often require meticulous reward tuning to track velocities or goal positions while preserving smooth motion on various terrains. Motion-imitation methods that train RL policies on demonstration data reduce reward engineering but fail to generalize to novel environments. We address this by proposing a hierarchical RL framework in which a low-level policy is first pre-trained to imitate animal motions on flat ground, thereby establishing motion priors. A subsequent high-level, goal-conditioned policy then builds on these priors, learning residual corrections that enable perceptive locomotion, local obstacle avoidance, and goal-directed navigation across diverse and rugged terrains. Simulation experiments demonstrate the effectiveness of learned residuals in adapting to progressively challenging uneven terrains while still preserving the locomotion characteristics provided by the motion priors. Furthermore, our results show improvements in motion regularization over baseline models trained without motion priors under similar reward setups. Real-world experiments with an ANYmal-D quadruped robot confirm that our policy generalizes animal-like locomotion skills to complex terrain, achieving smooth, efficient locomotion and local navigation on challenging terrain with obstacles.

@article{zhang2025_2505.16084,
  title={Motion Priors Reimagined: Adapting Flat-Terrain Skills for Complex Quadruped Mobility},
  author={Zewei Zhang and Chenhao Li and Takahiro Miki and Marco Hutter},
  journal={arXiv preprint arXiv:2505.16084},
  year={2025}
}