Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward

4 April 2025
Yanming Wan
Jiaxing Wu
Marwa Abdulhai
Lior Shani
Natasha Jaques
Abstract

Effective conversational agents must be able to personalize their behavior to suit a user's preferences, personality, and attributes, whether they are assisting with writing tasks or operating in domains like education or healthcare. Current training methods like Reinforcement Learning from Human Feedback (RLHF) prioritize helpfulness and safety but fall short in fostering truly empathetic, adaptive, and personalized interactions. Traditional approaches to personalization often rely on extensive user history, limiting their effectiveness for new or context-limited users. To overcome these limitations, we propose to incorporate an intrinsic motivation to improve the conversational agent's model of the user as an additional reward alongside multi-turn RLHF. This reward mechanism encourages the agent to actively elicit user traits by optimizing conversations to increase the accuracy of its user model. Consequently, the policy agent can deliver more personalized interactions by obtaining more information about the user. We applied our method in both education and fitness settings, where LLMs teach concepts or recommend personalized strategies based on users' hidden learning style or lifestyle attributes. Using LLM-simulated users, our approach outperformed a multi-turn RLHF baseline in revealing information about the users' preferences and adapting to them.
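The reward mechanism the abstract describes — a curiosity bonus for turns that sharpen the agent's user model, added on top of the usual RLHF reward — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the probability-based accuracy measure, and the weighting coefficient `beta` are all assumptions.

```python
def curiosity_reward(prob_before: float, prob_after: float) -> float:
    """Curiosity bonus: how much this conversation turn increased the
    probability the agent's user model assigns to the user's true hidden
    trait (e.g., learning style). Positive when the turn was informative."""
    return prob_after - prob_before


def total_reward(task_reward: float,
                 prob_before: float,
                 prob_after: float,
                 beta: float = 0.5) -> float:
    """Combine the standard multi-turn RLHF task reward with the
    curiosity bonus; beta trades off helpfulness against probing."""
    return task_reward + beta * curiosity_reward(prob_before, prob_after)


# Example: after the agent asks a probing question, its user model's
# confidence in the user's true learning style rises from 0.40 to 0.70,
# so the turn earns a bonus on top of the task reward of 1.0.
r = total_reward(task_reward=1.0, prob_before=0.40, prob_after=0.70)
```

Under this formulation the agent is incentivized to ask questions early in a conversation, since those turns yield the largest user-model improvements, and then to exploit the learned traits for personalization.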

@article{wan2025_2504.03206,
  title={Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward},
  author={Yanming Wan and Jiaxing Wu and Marwa Abdulhai and Lior Shani and Natasha Jaques},
  journal={arXiv preprint arXiv:2504.03206},
  year={2025}
}