A General Theoretical Paradigm to Understand Learning from Human Preferences
M. G. Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, Rémi Munos
arXiv:2310.12036, 18 October 2023

Papers citing "A General Theoretical Paradigm to Understand Learning from Human Preferences" (15 of 415 shown)

Improving In-context Learning via Bidirectional Alignment
Chengwei Qin, Wenhan Xia, Fangkai Jiao, Chen Chen, Yuchen Hu, Bosheng Ding, Shafiq R. Joty
28 Dec 2023

Preference as Reward, Maximum Preference Optimization with Importance Sampling
Zaifan Jiang, Xing Huang, Chao Wei
27 Dec 2023

Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint
Wei Xiong, Hanze Dong, Chen Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang
18 Dec 2023 [OffRL]

Policy Optimization in RLHF: The Impact of Out-of-preference Data
Ziniu Li, Tian Xu, Yang Yu
17 Dec 2023

On Diversified Preferences of Large Language Model Alignment
Dun Zeng, Yong Dai, Pengyu Cheng, Longyue Wang, Tianhao Hu, Wanshun Chen, Nan Du, Zenglin Xu
12 Dec 2023 [ALM]

RLHF and IIA: Perverse Incentives
Wanqiao Xu, Shi Dong, Xiuyuan Lu, Grace Lam, Zheng Wen, Benjamin Van Roy
02 Dec 2023

Nash Learning from Human Feedback
Rémi Munos, Michal Valko, Daniele Calandriello, M. G. Azar, Mark Rowland, ..., Nikola Momchev, Olivier Bachem, D. Mankowitz, Doina Precup, Bilal Piot
01 Dec 2023

A density estimation perspective on learning from pairwise human preferences
Vincent Dumoulin, Daniel D. Johnson, Pablo Samuel Castro, Hugo Larochelle, Yann Dauphin
23 Nov 2023

Adversarial Preference Optimization: Enhancing Your Alignment via RM-LLM Game
Pengyu Cheng, Yifan Yang, Jian Li, Yong Dai, Tianhao Hu, Peixin Cao, Nan Du, Xiaolong Li
14 Nov 2023

Controlled Decoding from Language Models
Sidharth Mudgal, Jong Lee, H. Ganapathy, Yaguang Li, Tao Wang, ..., Michael Collins, Trevor Strohman, Jilin Chen, Alex Beutel, Ahmad Beirami
25 Oct 2023

Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages
Shih-Cheng Huang, Pin-Zu Li, Yu-Chi Hsu, Kuang-Ming Chen, Yu Tung Lin, Shih-Kai Hsiao, Richard Tzong-Han Tsai, Hung-yi Lee
07 Oct 2023 [MoMe]

Mitigating the Alignment Tax of RLHF
Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Zeming Zheng, ..., Han Zhao, Nan Jiang, Heng Ji, Yuan Yao, Tong Zhang
12 Sep 2023 [MoMe, CLL]

Let Me Teach You: Pedagogical Foundations of Feedback for Language Models
Beatriz Borges, Niket Tandon, Tanja Käser, Antoine Bosselut
01 Jul 2023

Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation
Xiaoyu Chen, Han Zhong, Zhuoran Yang, Zhaoran Wang, Liwei Wang
23 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022 [OSLM, ALM]