PreCi: Pretraining and Continual Improvement of Humanoid Locomotion via Model-Assumption-Based Regularization

14 April 2025
Hyunyoung Jung
Zhaoyuan Gu
Ye Zhao
Hae-Won Park
Sehoon Ha
Abstract

Humanoid locomotion is a challenging task due to its inherent complexity and high-dimensional dynamics, as well as the need to adapt to diverse and unpredictable environments. In this work, we introduce a novel learning framework for effectively training a humanoid locomotion policy that imitates the behavior of a model-based controller while extending its capabilities to handle more complex locomotion tasks, such as more challenging terrain and higher velocity commands. Our framework consists of three key components: pre-training through imitation of the model-based controller, fine-tuning via reinforcement learning, and model-assumption-based regularization (MAR) during fine-tuning. In particular, MAR aligns the policy with actions from the model-based controller only in states where the model assumption holds to prevent catastrophic forgetting. We evaluate the proposed framework through comprehensive simulation tests and hardware experiments on a full-size humanoid robot, Digit, demonstrating a forward speed of 1.5 m/s and robust locomotion across diverse terrains, including slippery, sloped, uneven, and sandy terrains.
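The abstract's key idea, model-assumption-based regularization (MAR), can be sketched as a masked imitation penalty added during RL fine-tuning. The function name, the squared-error form of the penalty, and the boolean validity mask below are illustrative assumptions; the paper only states that MAR aligns the policy with the model-based controller's actions in states where the model assumption holds.

```python
import numpy as np

def mar_loss(policy_actions, controller_actions, assumption_holds):
    """Hypothetical MAR term: penalize deviation from the model-based
    controller's actions only in states where the model assumption holds,
    so fine-tuning cannot catastrophically forget the pretrained behavior
    there while remaining free to explore elsewhere."""
    mask = assumption_holds.astype(float)          # 1 where assumption valid
    per_state = np.sum((policy_actions - controller_actions) ** 2, axis=-1)
    denom = max(mask.sum(), 1.0)                   # avoid division by zero
    return float((mask * per_state).sum() / denom)

# Toy batch: 3 states with 2-D actions; the assumption holds in the
# first two states, so the large deviation in the third is ignored.
pi = np.array([[0.1, 0.0], [0.5, 0.5], [9.0, 9.0]])
ctrl = np.array([[0.0, 0.0], [0.5, 0.5], [0.0, 0.0]])
valid = np.array([True, True, False])
print(mar_loss(pi, ctrl, valid))  # → 0.005
```

In a full training loop this term would be weighted and added to the RL objective; states violating the assumption (e.g. on terrain outside the model's validity) contribute nothing, which is how MAR prevents forgetting without constraining the policy everywhere.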

View on arXiv
@article{jung2025_2504.09833,
  title={PreCi: Pretraining and Continual Improvement of Humanoid Locomotion via Model-Assumption-Based Regularization},
  author={Hyunyoung Jung and Zhaoyuan Gu and Ye Zhao and Hae-Won Park and Sehoon Ha},
  journal={arXiv preprint arXiv:2504.09833},
  year={2025}
}