More RLHF, More Trust? On The Impact of Human Preference Alignment On Language Model Trustworthiness
arXiv:2404.18870 · 29 April 2024
Aaron Jiaxun Li, Satyapriya Krishna, Himabindu Lakkaraju
Papers citing "More RLHF, More Trust? On The Impact of Human Preference Alignment On Language Model Trustworthiness" (2 / 2 papers shown)
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 04 Mar 2022
Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
ALM · 18 Sep 2019