RecoveryChaining: Learning Local Recovery Policies for Robust Manipulation

17 October 2024
Shivam Vats
Devesh K. Jha
Maxim Likhachev
Oliver Kroemer
Diego Romeres
Abstract

Model-based planners and controllers are commonly used to solve complex manipulation problems as they can efficiently optimize diverse objectives and generalize to long-horizon tasks. However, they often fail during deployment due to noisy actuation, partial observability, and imperfect models. To enable a robot to recover from such failures, we propose to use hierarchical reinforcement learning to learn a recovery policy. The recovery policy is triggered when a failure is detected based on sensory observations and seeks to take the robot to a state from which it can complete the task using the nominal model-based controllers. Our approach, called RecoveryChaining, uses a hybrid action space in which the model-based controllers are provided as additional nominal options; this allows the recovery policy to decide how to recover, when to switch to a nominal controller, and which controller to switch to, even with sparse rewards. We evaluate our approach on three multi-step manipulation tasks with sparse rewards, where it learns significantly more robust recovery policies than those learned by baselines. We successfully transfer recovery policies learned in simulation to a physical robot to demonstrate the feasibility of sim-to-real transfer with our method.
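To make the hybrid action space concrete, below is a minimal, hypothetical sketch of the control loop the abstract describes: a failure detector triggers a learned recovery policy whose action space contains both low-level robot actions and "switch to nominal controller i" options. This is an illustration based only on the abstract, not the authors' implementation; the names (NominalController, RecoveryPolicy, detect_failure, run_episode), the failure threshold, and the environment interface are all assumptions.

# Hypothetical sketch of a RecoveryChaining-style control loop.
# Names and the env interface are illustrative assumptions, not the paper's code.

import random


class NominalController:
    """Stand-in for a model-based controller that handles one task stage."""

    def __init__(self, name):
        self.name = name

    def act(self, obs):
        # A real controller would plan with a model; here we return a dummy command.
        return {"controller": self.name, "command": 0.0}


class RecoveryPolicy:
    """Learned policy over a hybrid action space.

    Actions 0..k-1 are low-level robot actions; actions k..k+n-1 mean
    "hand control back to nominal controller i", so the policy chooses how to
    recover, when to switch back, and which controller to switch to.
    """

    def __init__(self, num_low_level, num_controllers):
        self.num_low_level = num_low_level
        self.num_controllers = num_controllers

    def act(self, obs):
        # Placeholder for a trained hierarchical-RL policy.
        return random.randrange(self.num_low_level + self.num_controllers)


def detect_failure(obs):
    # Placeholder failure detector based on sensory observations.
    return obs.get("tracking_error", 0.0) > 0.05


def run_episode(env, controllers, recovery, max_steps=200):
    """Run nominal controllers, falling back to the recovery policy on failure."""
    obs, stage = env.reset(), 0
    for _ in range(max_steps):
        if detect_failure(obs):
            a = recovery.act(obs)
            if a >= recovery.num_low_level:
                stage = a - recovery.num_low_level        # switch to a nominal controller
                obs, done = env.step(controllers[stage].act(obs))
            else:
                obs, done = env.step({"command": a})      # take a low-level recovery action
        else:
            obs, done = env.step(controllers[stage].act(obs))
        if done:
            break
    return obs

Intuitively, exposing the nominal controllers as options in the action space is what lets the recovery policy earn the sparse task reward: handing control back to the right controller at the right state is itself an action it can learn to take.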

@article{vats2025_2410.13979,
  title={RecoveryChaining: Learning Local Recovery Policies for Robust Manipulation},
  author={Shivam Vats and Devesh K. Jha and Maxim Likhachev and Oliver Kroemer and Diego Romeres},
  journal={arXiv preprint arXiv:2410.13979},
  year={2025}
}