Aligning Crowd Feedback via Distributional Preference Reward Modeling (arXiv:2402.09764)
15 February 2024
Dexun Li, Cong Zhang, Kuicai Dong, Derrick-Goh-Xin Deik, Ruiming Tang, Yong Liu
Papers citing "Aligning Crowd Feedback via Distributional Preference Reward Modeling" (7 of 7 papers shown)
1. LIVS: A Pluralistic Alignment Dataset for Inclusive Public Spaces
   Rashid Mushkani, Shravan Nayak, Hugo Berard, Allison Cohen, Shin Koseki, Hadrien Bertrand
   27 Feb 2025
2. Disentangling Preference Representation and Text Generation for Efficient Individual Preference Alignment
   Jianfei Zhang, Jun Bai, B. Li, Yanmeng Wang, Rumei Li, Chenghua Lin, Wenge Rong
   31 Dec 2024
3. Alignment of Diffusion Models: Fundamentals, Challenges, and Future
   Buhua Liu, Shitong Shao, Bao Li, Lichen Bai, Zhiqiang Xu, Haoyi Xiong, James Kwok, Sumi Helal, Zeke Xie
   11 Sep 2024
4. Aligning to Thousands of Preferences via System Message Generalization
   Seongyun Lee, Sue Hyun Park, Seungone Kim, Minjoon Seo
   28 May 2024
5. Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
   Deep Ganguli, Liane Lovitt, John Kernion, Amanda Askell, Yuntao Bai, ..., Nicholas Joseph, Sam McCandlish, C. Olah, Jared Kaplan, Jack Clark
   23 Aug 2022
6. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
   04 Mar 2022
7. Fine-Tuning Language Models from Human Preferences
   Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
   18 Sep 2019