PersonalLLM: Tailoring LLMs to Individual Preferences

30 September 2024
Thomas P. Zollo
Andrew Siah
Naimeng Ye
Ang Li
Hongseok Namkoong
Abstract

As LLMs become capable of complex tasks, there is growing potential for personalized interactions tailored to the subtle and idiosyncratic preferences of the user. We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user. Departing from existing alignment benchmarks that implicitly assume uniform preferences, we curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences. Instead of persona-prompting LLMs based on high-level attributes (e.g., user's race or response length), which yields homogeneous preferences relative to humans, we develop a method that can simulate a large user base with diverse preferences from a set of pre-trained reward models. Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms that grapple with continual data sparsity (little relevant feedback from the particular user) by leveraging historical data from other (similar) users. We explore basic in-context learning and meta-learning baselines to illustrate the utility of PersonalLLM and highlight the need for future methodological development. Our dataset is available at this https URL
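The key mechanism the abstract describes is simulating a heterogeneous user population by combining a set of pre-trained reward models into per-user latent preferences. Below is a minimal Python sketch of that idea, not the authors' released code: the reward models are hypothetical stand-in scoring functions, and each simulated user is a Dirichlet-sampled weight vector over them.

```python
# Sketch (not the PersonalLLM codebase): simulate users whose latent preference
# is a weighted mixture of several base reward models. In practice the base
# scorers would be real pre-trained reward models over (prompt, response) pairs;
# here they are simple placeholder functions so the example is self-contained.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder "reward models": each maps a (prompt, response) pair to a scalar.
def rm_length(prompt: str, response: str) -> float:
    return min(len(response) / 500.0, 1.0)          # prefers longer answers

def rm_formality(prompt: str, response: str) -> float:
    return float(not any(w in response.lower() for w in ("gonna", "kinda")))

def rm_overlap(prompt: str, response: str) -> float:
    p, r = set(prompt.lower().split()), set(response.lower().split())
    return len(p & r) / (len(p) + 1e-9)             # prefers on-topic answers

REWARD_MODELS = [rm_length, rm_formality, rm_overlap]

def sample_users(n_users: int, alpha: float = 0.3) -> np.ndarray:
    """Each simulated user is a Dirichlet-sampled weight vector over the base
    reward models; a small concentration alpha yields more diverse users."""
    return rng.dirichlet([alpha] * len(REWARD_MODELS), size=n_users)

def user_score(weights: np.ndarray, prompt: str, response: str) -> float:
    """A user's latent preference: weighted sum of base reward-model scores."""
    return float(sum(w * rm(prompt, response)
                     for w, rm in zip(weights, REWARD_MODELS)))

def user_preference(weights: np.ndarray, prompt: str, responses: list[str]) -> int:
    """Index of the response this simulated user prefers."""
    return int(np.argmax([user_score(weights, prompt, r) for r in responses]))

if __name__ == "__main__":
    users = sample_users(n_users=5)
    prompt = "Explain what a reward model is."
    responses = [
        "A reward model scores responses so we can rank them.",
        "It's kinda like a judge, it's gonna score stuff.",
        "A reward model is a learned scoring function, trained on human "
        "preference comparisons, that assigns higher values to responses "
        "people tend to prefer.",
    ]
    for i, w in enumerate(users):
        choice = user_preference(w, prompt, responses)
        print(f"user {i}: weights={np.round(w, 2)} prefers response {choice}")
```

Preference labels collected from such simulated users (each user favoring a different mixture of the base scorers) can then serve as heterogeneous feedback for evaluating personalization baselines such as the in-context learning and meta-learning approaches mentioned in the abstract.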

@article{zollo2025_2409.20296,
  title={PersonalLLM: Tailoring LLMs to Individual Preferences},
  author={Thomas P. Zollo and Andrew Wei Tung Siah and Naimeng Ye and Ang Li and Hongseok Namkoong},
  journal={arXiv preprint arXiv:2409.20296},
  year={2025}
}