
arXiv: 2310.19007
Behavior Alignment via Reward Function Optimization

29 October 2023
Dhawal Gupta, Yash Chandak, Scott M. Jordan, Philip S. Thomas, Bruno Castro da Silva
Links: arXiv · PDF · HTML

Papers citing "Behavior Alignment via Reward Function Optimization"

4 papers shown.

ELEMENTAL: Interactive Learning from Demonstrations and Vision-Language Models for Reward Design in Robotics
Letian Chen, Nina Moorman, Matthew C. Gombolay
Topics: OffRL, LM&Ro · 27 Nov 2024

Highly Efficient Self-Adaptive Reward Shaping for Reinforcement Learning
Haozhe Ma, Zhengding Luo, Thanh Vinh Vo, Kuankuan Sima, Tze-Yun Leong
06 Aug 2024

Bilevel reinforcement learning via the development of hyper-gradient without lower-level convexity
Yan Yang, Bin Gao, Ya-xiang Yuan
30 May 2024

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
Topics: OOD · 09 Mar 2017