Mapping Social Choice Theory to RLHF
Jessica Dai, Eve Fleisig
arXiv:2404.13038 · 19 April 2024

Papers citing "Mapping Social Choice Theory to RLHF" (7 of 7 shown)

Clone-Robust AI Alignment
Ariel D. Procaccia, Benjamin G. Schiffer, Shirley Zhang
17 Jan 2025

Multi-objective Reinforcement Learning: A Tool for Pluralistic Alignment
Peter Vamplew, Conor F. Hayes, Cameron Foale, Richard Dazeley, Hadassah Harland
15 Oct 2024

Improving Context-Aware Preference Modeling for Language Models
Silviu Pitis, Ziang Xiao, Nicolas Le Roux, Alessandro Sordoni
20 Jul 2024

Axioms for AI Alignment from Human Feedback
Luise Ge, Daniel Halpern, Evi Micha, Ariel D. Procaccia, Itai Shapira, Yevgeniy Vorobeychik, Junlin Wu
23 May 2024

Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback
Vincent Conitzer, Rachel Freedman, J. Heitzig, Wesley H. Holliday, Bob M. Jacobs, ..., Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde, W. Zwicker
16 Apr 2024

Incorporating Worker Perspectives into MTurk Annotation Practices for NLP
Olivia Huang, Eve Fleisig, Dan Klein
06 Nov 2023

Learning Reward Functions from Scale Feedback
Nils Wilde, Erdem Biyik, Dorsa Sadigh, Stephen L. Smith
01 Oct 2021