Benchmarks and Algorithms for Offline Preference-Based Reward Learning
3 January 2023
Authors: Daniel Shin, Anca Dragan, Daniel S. Brown
Tags: OffRL

Papers citing "Benchmarks and Algorithms for Offline Preference-Based Reward Learning"

42 / 42 papers shown
Reinforcement Learning from Multi-level and Episodic Human Feedback
Authors: Muhammad Qasim Elahi, Somtochukwu Oguchienti, Maheed H. Ahmed, Mahsa Ghasemi
Tags: OffRL
44 | 0 | 0 | 20 Apr 2025
Adversarial Policy Optimization for Offline Preference-based Reinforcement Learning
Authors: Hyungkyu Kang, Min-hwan Oh
Tags: OffRL
45 | 0 | 0 | 07 Mar 2025
Distributionally Robust Reinforcement Learning with Human Feedback
Authors: Debmalya Mandal, Paulius Sasnauskas, Goran Radanović
39 | 1 | 0 | 01 Mar 2025
Preference-Based Multi-Agent Reinforcement Learning: Data Coverage and Algorithmic Techniques
Authors: Natalia Zhang, X. Wang, Qiwen Cui, Runlong Zhou, Sham Kakade, Simon S. Du
Tags: OffRL
48 | 1 | 0 | 10 Jan 2025
Forward KL Regularized Preference Optimization for Aligning Diffusion Policies
Authors: Zhao Shan, Chenyou Fan, Shuang Qiu, Jiyuan Shi, Chenjia Bai
33 | 4 | 0 | 09 Sep 2024
Representation Alignment from Human Feedback for Cross-Embodiment Reward Learning from Mixed-Quality Demonstrations
Authors: Connor Mattson, Anurag Aribandi, Daniel S. Brown
33 | 0 | 0 | 10 Aug 2024
Listwise Reward Estimation for Offline Preference-based Reinforcement Learning
Authors: Heewoong Choi, Sangwon Jung, Hongjoon Ahn, Taesup Moon
Tags: OffRL
39 | 2 | 0 | 08 Aug 2024
Hindsight Preference Learning for Offline Preference-based Reinforcement Learning
Authors: Chen-Xiao Gao, Shengjun Fang, Chenjun Xiao, Yang Yu, Zongzhang Zhang
Tags: OffRL
30 | 0 | 0 | 05 Jul 2024
Preference Elicitation for Offline Reinforcement Learning
Authors: Alizée Pace, Bernhard Schölkopf, Gunnar Rätsch, Giorgia Ramponi
Tags: OffRL
61 | 1 | 0 | 26 Jun 2024
Order-Optimal Instance-Dependent Bounds for Offline Reinforcement Learning with Preference Feedback
Authors: Zhirui Chen, Vincent Y. F. Tan
Tags: OffRL
36 | 0 | 0 | 18 Jun 2024
Preference Alignment with Flow Matching
Authors: Minu Kim, Yongsik Lee, Sehyeok Kang, Jihwan Oh, Song Chong, Seyoung Yun
32 | 1 | 0 | 30 May 2024
Offline Regularised Reinforcement Learning for Large Language Models Alignment
Authors: Pierre Harvey Richemond, Yunhao Tang, Daniel Guo, Daniele Calandriello, M. G. Azar, ..., Gil Shamir, Rishabh Joshi, Tianqi Liu, Rémi Munos, Bilal Piot
Tags: OffRL
40 | 21 | 0 | 29 May 2024
A Unified Linear Programming Framework for Offline Reward Learning from Human Demonstrations and Feedback
Authors: Kihyun Kim, Jiawei Zhang, Asuman Ozdaglar, P. Parrilo
Tags: OffRL
33 | 1 | 0 | 20 May 2024
The Role of Predictive Uncertainty and Diversity in Embodied AI and Robot Learning
Authors: Ransalu Senanayake
32 | 8 | 0 | 06 May 2024
Optimal Design for Human Feedback
Authors: Subhojyoti Mukherjee, Anusha Lalitha, Kousha Kalantari, Aniket Deshmukh, Ge Liu, Yifei Ma, B. Kveton
36 | 0 | 0 | 22 Apr 2024
Dataset Reset Policy Optimization for RLHF
Authors: Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Kianté Brantley, Dipendra Kumar Misra, Jason D. Lee, Wen Sun
Tags: OffRL
22 | 21 | 0 | 12 Apr 2024
Reward Learning from Suboptimal Demonstrations with Applications in Surgical Electrocautery
Authors: Zohre Karimi, Shing-Hei Ho, Bao Thach, Alan Kuntz, Daniel S. Brown
Tags: OffRL
27 | 7 | 0 | 10 Apr 2024
Regularized Conditional Diffusion Model for Multi-Task Preference Alignment
Authors: Xudong Yu, Chenjia Bai, Haoran He, Changhong Wang, Xuelong Li
32 | 6 | 0 | 07 Apr 2024
Human Alignment of Large Language Models through Online Preference Optimisation
Authors: Daniele Calandriello, Daniel Guo, Rémi Munos, Mark Rowland, Yunhao Tang, ..., Michal Valko, Tianqi Liu, Rishabh Joshi, Zeyu Zheng, Bilal Piot
44 | 60 | 0 | 13 Mar 2024
FARPLS: A Feature-Augmented Robot Trajectory Preference Labeling System to Assist Human Labelers' Preference Elicitation
Authors: Hanfang Lyu, Yuanchen Bai, Xin Liang, Ujaan Das, Chuhan Shi, Leiliang Gong, Yingchi Li, Mingfei Sun, Ming Ge, Xiaojuan Ma
40 | 0 | 0 | 10 Mar 2024
Bayesian Constraint Inference from User Demonstrations Based on Margin-Respecting Preference Models
Authors: Dimitris Papadimitriou, Daniel S. Brown
40 | 1 | 0 | 04 Mar 2024
Corruption Robust Offline Reinforcement Learning with Human Feedback
Authors: Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanović
Tags: OffRL
28 | 8 | 0 | 09 Feb 2024
Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback
Authors: Yifu Yuan, Jianye Hao, Yi-An Ma, Zibin Dong, Hebin Liang, Jinyi Liu, Zhixin Feng, Kai-Wen Zhao, Yan Zheng
Tags: OffRL, ALM
16 | 14 | 0 | 04 Feb 2024
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF
Authors: Banghua Zhu, Michael I. Jordan, Jiantao Jiao
26 | 23 | 0 | 29 Jan 2024
WARM: On the Benefits of Weight Averaged Reward Models
Authors: Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret
102 | 93 | 0 | 22 Jan 2024
Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint
Authors: Wei Xiong, Hanze Dong, Chen Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang
Tags: OffRL
36 | 155 | 0 | 18 Dec 2023
Mastering Stacking of Diverse Shapes with Large-Scale Iterative Reinforcement Learning on Real Robots
Authors: Thomas Lampe, A. Abdolmaleki, Sarah Bechtle, Sandy H. Huang, Jost Tobias Springenberg, ..., Markus Wulfmeier, Jingwei Zhang, Francesco Nori, N. Heess, Martin Riedmiller
Tags: OffRL
27 | 9 | 0 | 18 Dec 2023
A density estimation perspective on learning from pairwise human preferences
Authors: Vincent Dumoulin, Daniel D. Johnson, Pablo Samuel Castro, Hugo Larochelle, Yann Dauphin
29 | 12 | 0 | 23 Nov 2023
Differentially Private Reward Estimation with Preference Feedback
Authors: Sayak Ray Chowdhury, Xingyu Zhou, Nagarajan Natarajan
26 | 4 | 0 | 30 Oct 2023
Unsupervised Behavior Extraction via Random Intent Priors
Authors: Haotian Hu, Yiqin Yang, Jianing Ye, Ziqing Mai, Chongjie Zhang
Tags: OffRL
32 | 6 | 0 | 28 Oct 2023
What Matters to You? Towards Visual Representation Alignment for Robot Learning
Authors: Ran Tian, Chenfeng Xu, Masayoshi Tomizuka, Jitendra Malik, Andrea V. Bajcsy
19 | 9 | 0 | 11 Oct 2023
Provable Benefits of Policy Learning from Human Preferences in Contextual Bandit Problems
Authors: Xiang Ji, Huazheng Wang, Minshuo Chen, Tuo Zhao, Mengdi Wang
Tags: OffRL
21 | 6 | 0 | 24 Jul 2023
Provable Reward-Agnostic Preference-Based Reinforcement Learning
Authors: Wenhao Zhan, Masatoshi Uehara, Wen Sun, Jason D. Lee
19 | 7 | 0 | 29 May 2023
Query-Policy Misalignment in Preference-Based Reinforcement Learning
Authors: Xiao Hu, Jianxiong Li, Xianyuan Zhan, Qing-Shan Jia, Ya-Qin Zhang
11 | 8 | 0 | 27 May 2023
Provable Offline Preference-Based Reinforcement Learning
Authors: Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun
Tags: OffRL
32 | 12 | 0 | 24 May 2023
Principled Reinforcement Learning with Human Feedback from Pairwise or $K$-wise Comparisons
Authors: Banghua Zhu, Jiantao Jiao, Michael I. Jordan
Tags: OffRL
23 | 177 | 0 | 26 Jan 2023
Efficient Preference-Based Reinforcement Learning Using Learned Dynamics Models
Authors: Yi Liu, Gaurav Datta, Ellen R. Novoseller, Daniel S. Brown
18 | 20 | 0 | 11 Jan 2023
Autonomous Assessment of Demonstration Sufficiency via Bayesian Inverse Reinforcement Learning
Authors: Tuan-Duong Trinh, Haoyu Chen, Daniel S. Brown
Tags: OffRL
23 | 7 | 0 | 28 Nov 2022
Causal Confusion and Reward Misidentification in Preference-Based Reward Learning
Authors: J. Tien, Jerry Zhi-Yang He, Zackory M. Erickson, Anca Dragan, Daniel S. Brown
Tags: CML
28 | 39 | 0 | 13 Apr 2022
Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Authors: Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
Tags: OffRL, GP
329 | 1,949 | 0 | 04 May 2020
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Authors: Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
Tags: UQCV, BDL
268 | 5,660 | 0 | 05 Dec 2016
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Authors: Y. Gal, Zoubin Ghahramani
Tags: UQCV, BDL
249 | 9,134 | 0 | 06 Jun 2015