When Personalization Meets Reality: A Multi-Faceted Analysis of Personalized Preference Learning

26 February 2025
Yijiang River Dong, Tiancheng Hu, Yinhong Liu, Ahmet Üstün, Nigel Collier
Abstract

While Reinforcement Learning from Human Feedback (RLHF) is widely used to align Large Language Models (LLMs) with human preferences, it typically assumes homogeneous preferences across users, overlooking diverse human values and minority viewpoints. Although personalized preference learning addresses this by tailoring separate preferences for individual users, the field lacks standardized methods to assess its effectiveness. We present a multi-faceted evaluation framework that measures not only performance but also fairness, unintended effects, and adaptability across varying levels of preference divergence. Through extensive experiments comparing eight personalization methods across three preference datasets, we demonstrate that performance differences between methods could reach 36% when users strongly disagree, and personalization can introduce up to 20% safety misalignment. These findings highlight the critical need for holistic evaluation approaches to advance the development of more effective and inclusive preference learning systems.
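The abstract does not describe any of the eight personalization methods in detail, so the following is only an illustrative sketch, under assumptions of my own, of one common family of approaches to personalized preference learning: a reward model conditioned on a learned per-user embedding and trained with a Bradley-Terry pairwise loss. All names here (PersonalizedRewardModel, pairwise_loss) are hypothetical and are not taken from the paper.

# Illustrative sketch (not the paper's code): a user-conditioned reward model
# trained with a Bradley-Terry pairwise loss, one common way to personalize
# preference learning by giving each annotator a learned embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedRewardModel(nn.Module):
    def __init__(self, num_users: int, text_dim: int = 768, user_dim: int = 32):
        super().__init__()
        # Per-user embedding captures how an individual user's preferences
        # deviate from the population average.
        self.user_emb = nn.Embedding(num_users, user_dim)
        self.scorer = nn.Sequential(
            nn.Linear(text_dim + user_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, text_feats: torch.Tensor, user_ids: torch.Tensor) -> torch.Tensor:
        # text_feats: (batch, text_dim) pooled response representations
        # user_ids:   (batch,) integer user identifiers
        u = self.user_emb(user_ids)
        return self.scorer(torch.cat([text_feats, u], dim=-1)).squeeze(-1)

def pairwise_loss(model, chosen, rejected, user_ids):
    # Bradley-Terry objective: the chosen response should score higher
    # than the rejected one for the same user.
    r_chosen = model(chosen, user_ids)
    r_rejected = model(rejected, user_ids)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage with random features standing in for encoder outputs.
model = PersonalizedRewardModel(num_users=100)
chosen = torch.randn(8, 768)
rejected = torch.randn(8, 768)
users = torch.randint(0, 100, (8,))
loss = pairwise_loss(model, chosen, rejected, users)
loss.backward()

A framework like the one the paper proposes would then evaluate such a model not only on held-out preference accuracy but also on fairness across users, safety regressions, and robustness at different levels of inter-user disagreement.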

@article{dong2025_2502.19158,
  title={When Personalization Meets Reality: A Multi-Faceted Analysis of Personalized Preference Learning},
  author={Yijiang River Dong and Tiancheng Hu and Yinhong Liu and Ahmet Üstün and Nigel Collier},
  journal={arXiv preprint arXiv:2502.19158},
  year={2025}
}