Benchmarking the Full-Order Model Optimization Based Imitation in the Humanoid Robot Reinforcement Learning Walk

15 December 2023
Ekaterina Chaikovskaya
Inna Minashina
Vladimir Litvinenko
Egor Davydenko
Dmitry Makarov
Yulia Danik
R. Gorbachev
Abstract

When a gait for a bipedal robot is developed using deep reinforcement learning, reference trajectories may or may not be used. Each approach has its own advantages and disadvantages, and the choice is left to the control developer. This paper investigates the effect of reference trajectories on locomotion learning and on the resulting gaits. We implemented three gaits of a full-order anthropomorphic robot model with different reward imitation ratios, performed sim-to-sim control policy transfer, and compared the gaits in terms of robustness and energy efficiency. In addition, we conducted a qualitative analysis of the gaits through a user study, since our goal was to create an appealing and natural gait for a humanoid robot. According to the experimental results, the most successful approach was the one in which the average per-episode rewards for imitation and for adherence to the commanded velocity remained balanced throughout training. The gait obtained with this method largely retains naturalness (a median rating of 3.6 in the user study, compared to 4.0 for the gait trained with imitation only), while its robustness remains close to that of the gait trained without reference trajectories.
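The balanced-reward idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, weights, and exponential error-tracking terms are illustrative assumptions, chosen because exponentiated negative tracking errors are a common reward shape in imitation-based locomotion learning. The weights `w_imitation` and `w_velocity` play the role of the paper's "reward imitation ratio".

```python
import numpy as np

def combined_reward(qpos, qpos_ref, v_actual, v_cmd,
                    w_imitation=0.5, w_velocity=0.5,
                    k_imit=2.0, k_vel=5.0):
    """Hypothetical per-step reward mixing an imitation term (tracking
    reference joint positions) with a command-velocity term."""
    # Imitation term: 1.0 at perfect tracking, decays with joint-space error
    r_imit = np.exp(-k_imit * np.sum((np.asarray(qpos) - np.asarray(qpos_ref)) ** 2))
    # Velocity term: 1.0 when the base velocity matches the commanded velocity
    r_vel = np.exp(-k_vel * (v_actual - v_cmd) ** 2)
    return w_imitation * r_imit + w_velocity * r_vel
```

Under this shape, monitoring the episode averages of the two terms separately during training, and adjusting the weights so neither term dominates, would correspond to the balanced regime the paper found most successful.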
