Population-Proportional Preference Learning from Human Feedback: An Axiomatic Approach

5 June 2025
Kihyun Kim
Jiawei Zhang
Asuman Ozdaglar
Pablo A. Parrilo
Main: 3 pages · 2 figures · 4 tables · Appendix: 20 pages
Abstract

Conventional preference learning methods often prioritize opinions held more widely when aggregating preferences from multiple evaluators. This may result in policies that are biased in favor of some types of opinions or groups. The objective of this paper is to develop a novel preference learning framework capable of aligning aggregate opinions and policies proportionally with the true population distribution of evaluator preferences. Our approach infers the feasible set of evaluator population distributions directly from pairwise comparison data. Using these estimates, the algorithm constructs a policy that satisfies foundational axioms from social choice theory, namely monotonicity and Pareto efficiency, as well as our newly introduced axioms of population-proportional representation and population-bounded robustness. We propose a soft-max relaxation method that smoothly trades off population-proportional representation against the selection of the Condorcet winner (which beats all other options in pairwise comparisons). Finally, we validate the effectiveness and scalability of our approach through experiments on both tabular recommendation tasks and large-scale language model alignment.
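The soft-max trade-off described in the abstract can be illustrated with a small sketch: starting from pairwise comparison counts, a temperature parameter interpolates between a distribution that roughly mirrors how often each option is preferred and a distribution concentrated on the Condorcet winner. This is an illustrative toy example, not the paper's algorithm; the Copeland-style score, the parameter beta, and the toy data are all assumptions.

import numpy as np

def pairwise_win_matrix(comparisons, n_options):
    """Count how often option i beats option j.

    `comparisons` is an iterable of (winner, loser) index pairs.
    """
    wins = np.zeros((n_options, n_options))
    for winner, loser in comparisons:
        wins[winner, loser] += 1
    return wins

def softmax_policy(wins, beta):
    """Return a distribution over options.

    Small beta spreads mass roughly in proportion to how often each
    option is preferred (a crude proxy for proportional representation);
    large beta concentrates on the option with the best pairwise record
    (the Condorcet winner, when one exists).
    """
    totals = wins + wins.T
    # Fraction of pairwise contests won against each alternative.
    win_rates = np.divide(wins, totals,
                          out=np.full_like(wins, 0.5), where=totals > 0)
    np.fill_diagonal(win_rates, 0.0)
    scores = win_rates.sum(axis=1)   # Copeland-like aggregate score
    logits = beta * scores
    logits -= logits.max()           # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy data: option 0 is a narrow Condorcet winner; options 1 and 2
# each retain sizable minority support.
comparisons = ([(0, 1)] * 6 + [(1, 0)] * 4 + [(0, 2)] * 6 +
               [(2, 0)] * 4 + [(1, 2)] * 5 + [(2, 1)] * 5)
wins = pairwise_win_matrix(comparisons, n_options=3)
print(softmax_policy(wins, beta=1.0))   # closer to proportional weights
print(softmax_policy(wins, beta=50.0))  # concentrates on the Condorcet winner

With a small beta the output stays close to the options' relative win rates, while a large beta recovers winner-take-all selection; the paper's method additionally enforces the stated social-choice axioms, which this sketch does not attempt.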

@article{kim2025_2506.05619,
  title={Population-Proportional Preference Learning from Human Feedback: An Axiomatic Approach},
  author={Kihyun Kim and Jiawei Zhang and Asuman Ozdaglar and Pablo A. Parrilo},
  journal={arXiv preprint arXiv:2506.05619},
  year={2025}
}