
RAIDEN-R1: Improving Role-awareness of LLMs via GRPO with Verifiable Reward

Abstract

Role-playing conversational agents (RPCAs) face persistent challenges in maintaining role consistency. To address this, we propose RAIDEN-R1, a novel reinforcement learning framework that integrates Verifiable Role-Awareness Reward (VRAR). The method introduces both singular and multi-term mining strategies to generate quantifiable rewards by assessing role-specific keys. Additionally, we construct a high-quality, role-aware Chain-of-Thought dataset through multi-LLM collaboration, and conduct experiments on it to enhance reasoning coherence. Experiments on the RAIDEN benchmark demonstrate RAIDEN-R1's superiority: our 14B-GRPO model achieves 88.04% and 88.65% accuracy on the Script-Based Knowledge and Conversation Memory metrics, respectively, outperforming baseline models while maintaining robustness. Case analyses further reveal the model's enhanced ability to resolve conflicting contextual cues and sustain first-person narrative consistency. This work bridges the non-quantifiability gap in RPCA training and provides insights into role-aware reasoning patterns, advancing the development of RPCAs.
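
To make the idea concrete: a verifiable role-awareness reward pairs naturally with GRPO's group-relative advantage computation. The Python sketch below is illustrative only and is not the paper's implementation; the function names role_awareness_reward and grpo_advantages, the mode switch between the singular and multi-term mining strategies, and the exact scoring rules are all assumptions layered on the abstract's description of checking responses against mined role-specific keys.

import re

def role_awareness_reward(response: str, role_keys: list[str],
                          mode: str = "multi") -> float:
    # Verifiable reward: check the sampled response for mined
    # role-specific key terms (hypothetical scoring, not the paper's).
    hits = [k for k in role_keys
            if re.search(re.escape(k), response, re.IGNORECASE)]
    if mode == "singular":
        # Singular mining: one decisive key yields a binary reward.
        return 1.0 if hits else 0.0
    # Multi-term mining: partial credit proportional to matched keys.
    return len(hits) / len(role_keys) if role_keys else 0.0

def grpo_advantages(rewards: list[float]) -> list[float]:
    # GRPO scores a group of responses sampled for the same prompt
    # relative to each other: normalize by group mean and std dev.
    mu = sum(rewards) / len(rewards)
    std = (sum((r - mu) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mu) / (std + 1e-6) for r in rewards]

In a standard GRPO loop, one would sample a group of candidate replies per prompt, score each with a reward like the above, and feed the normalized advantages into the policy-gradient update; the verifiable reward takes the place of a learned reward model, which is what makes the signal quantifiable.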

@article{wang2025_2505.10218,
  title={RAIDEN-R1: Improving Role-awareness of LLMs via GRPO with Verifiable Reward},
  author={Zongsheng Wang and Kaili Sun and Bowen Wu and Qun Yu and Ying Li and Baoxun Wang},
  journal={arXiv preprint arXiv:2505.10218},
  year={2025}
}