DIPPER: Direct Preference Optimization to Accelerate Primitive-Enabled Hierarchical Reinforcement Learning

arXiv:2406.10892
3 January 2025
Utsav Singh
Souradip Chakraborty
Wesley A. Suttle
Brian M. Sadler
Vinay P. Namboodiri
Amrit Singh Bedi
OffRL

Papers citing "DIPPER: Direct Preference Optimization to Accelerate Primitive-Enabled Hierarchical Reinforcement Learning"

2 of 2 citing papers shown

BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
Mao Ye, B. Liu, S. Wright, Peter Stone, Qian Liu
19 Sep 2022

Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks
Soroush Nasiriany, Huihan Liu, Yuke Zhu
07 Oct 2021