ResearchTrend.AI
Persona-judge: Personalized Alignment of Large Language Models via Token-level Self-judgment

17 April 2025
Xiaotian Zhang
Ruizhe Chen
Yang Feng
Zuozhu Liu
Abstract

Aligning language models with human preferences presents significant challenges, particularly in achieving personalization without incurring excessive computational costs. Existing methods rely on reward signals and additional annotated data, limiting their scalability and adaptability to diverse human values. To address these challenges, we introduce Persona-judge, a novel discriminative paradigm that enables training-free personalized alignment with unseen preferences. Instead of optimizing policy parameters through external reward feedback, Persona-judge leverages the intrinsic preference judgment capabilities of the model. Specifically, a draft model generates candidate tokens conditioned on a given preference, while a judge model, embodying another preference, cross-validates whether the predicted tokens should be accepted. Experimental results demonstrate that Persona-judge, using the inherent preference evaluation mechanisms of the model, offers a scalable and computationally efficient solution to personalized alignment, paving the way for more adaptive, customized alignment.
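The draft-then-judge loop described in the abstract can be sketched in miniature. In the sketch below, the two "personas" are toy unigram distributions standing in for the per-step next-token distributions of the draft and judge models, and the acceptance rule (judge probability above a threshold `tau`) is an illustrative assumption, not the paper's actual token-level judgment criterion.

```python
import random

def persona_judge_decode(draft_probs, judge_probs, max_len=10, tau=0.05, seed=0):
    """Toy sketch of token-level self-judgment decoding.

    draft_probs / judge_probs: token -> probability under each persona
    (stand-ins for the next-token distributions of the draft and judge
    models). A drafted token is accepted if the judge persona also
    assigns it at least `tau` probability; otherwise the judge's own
    top token is emitted instead.
    """
    rng = random.Random(seed)
    tokens = []
    vocab = list(draft_probs)
    weights = [draft_probs[t] for t in vocab]
    for _ in range(max_len):
        candidate = rng.choices(vocab, weights=weights)[0]  # draft proposes
        if judge_probs.get(candidate, 0.0) >= tau:          # judge accepts
            tokens.append(candidate)
        else:                                               # judge overrides
            tokens.append(max(judge_probs, key=judge_probs.get))
    return tokens

# Two hypothetical personas: the judge assigns no mass to "slang",
# so it vetoes that token whenever the draft proposes it.
draft = {"concise": 0.6, "formal": 0.3, "slang": 0.1}
judge = {"concise": 0.5, "formal": 0.5}
out = persona_judge_decode(draft, judge, max_len=5)
assert "slang" not in out
```

The point of the sketch is the control flow: generation stays training-free because both personas only score tokens at decode time, with no reward model or parameter update involved.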

@article{zhang2025_2504.12663,
  title={Persona-judge: Personalized Alignment of Large Language Models via Token-level Self-judgment},
  author={Xiaotian Zhang and Ruizhe Chen and Yang Feng and Zuozhu Liu},
  journal={arXiv preprint arXiv:2504.12663},
  year={2025}
}