Mapping Social Choice Theory to RLHF

19 April 2024
Jessica Dai, Eve Fleisig
arXiv:2404.13038

Papers citing "Mapping Social Choice Theory to RLHF"

7 papers

Clone-Robust AI Alignment
Ariel D. Procaccia, Benjamin G. Schiffer, Shirley Zhang
17 Jan 2025

Multi-objective Reinforcement Learning: A Tool for Pluralistic Alignment
Peter Vamplew, Conor F. Hayes, Cameron Foale, Richard Dazeley, Hadassah Harland
15 Oct 2024

Improving Context-Aware Preference Modeling for Language Models
Silviu Pitis, Ziang Xiao, Nicolas Le Roux, Alessandro Sordoni
20 Jul 2024

Axioms for AI Alignment from Human Feedback
Luise Ge, Daniel Halpern, Evi Micha, Ariel D. Procaccia, Itai Shapira, Yevgeniy Vorobeychik, Junlin Wu
23 May 2024

Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback
Vincent Conitzer, Rachel Freedman, J. Heitzig, Wesley H. Holliday, Bob M. Jacobs, ..., Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde, W. Zwicker
16 Apr 2024

Incorporating Worker Perspectives into MTurk Annotation Practices for NLP
Olivia Huang, Eve Fleisig, Dan Klein
06 Nov 2023

Learning Reward Functions from Scale Feedback
Nils Wilde, Erdem Biyik, Dorsa Sadigh, Stephen L. Smith
01 Oct 2021