Argumentative Reward Learning: Reasoning About Human Preferences
Francis Rhys Ward, Francesco Belardinelli, Francesca Toni
28 September 2022 · arXiv: 2209.14010 · Topic: HAI
Papers citing "Argumentative Reward Learning: Reasoning About Human Preferences" (2 of 2 shown)
1. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
   04 Mar 2022 · Topics: OSLM, ALM

2. Fine-Tuning Language Models from Human Preferences
   Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
   18 Sep 2019 · Topic: ALM