Modeling the One-to-Many Property in Open-Domain Dialogue with LLMs

18 June 2025
Jing Yang Lee
Kong-Aik Lee
Woon-Seng Gan
arXiv (abs) · PDF · HTML
Main: 8 pages · 7 figures · 4 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

Open-domain Dialogue (OD) exhibits a one-to-many (o2m) property, whereby multiple appropriate responses exist for a single dialogue context. Although prior research has shown that modeling this property boosts response diversity, most modern LLM-based dialogue agents do not model it explicitly. In this work, we model the o2m property of OD in LLMs by decomposing OD generation into two key tasks: Multi-Response Generation (MRG) and Preference-based Selection (PS). MRG entails generating a set of n semantically and lexically diverse, high-quality responses for a given dialogue context, and PS then selects a single response from this set based on human preference. To facilitate MRG and PS, we introduce o2mDial, a dialogue corpus explicitly designed to capture the o2m property by featuring multiple plausible responses for each context. Leveraging o2mDial, we propose new in-context learning and instruction-tuning strategies, as well as novel evaluation metrics for MRG, alongside a model-based approach for PS. Empirical results demonstrate that applying the proposed two-stage framework to smaller LLMs for OD generation enhances overall response diversity while maintaining contextual coherence, improving response quality by up to 90% and bringing these models closer to the performance of larger ones.
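As a rough illustration of the two-stage framework described in the abstract, the sketch below first generates n candidate responses (MRG) and then selects one with a preference model (PS). The function names, prompt text, and the `llm` and `preference_score` callables are assumptions made for illustration; they are not the authors' implementation or the o2mDial tooling.

```python
# Minimal sketch of the MRG + PS pipeline (illustrative assumptions only).
from typing import Callable, List


def generate_candidates(llm: Callable[[str], str], context: str, n: int = 5) -> List[str]:
    """Multi-Response Generation (MRG): repeatedly prompt the model for a
    response that differs from the candidates produced so far."""
    candidates: List[str] = []
    for _ in range(n):
        prior = "\n".join(f"- {c}" for c in candidates) or "(none yet)"
        prompt = (
            "You are an open-domain dialogue agent.\n"
            f"Dialogue context:\n{context}\n"
            f"Responses already proposed:\n{prior}\n"
            "Write one new, appropriate response that differs in wording and "
            "meaning from the responses above."
        )
        candidates.append(llm(prompt))
    return candidates


def select_preferred(preference_score: Callable[[str, str], float],
                     context: str, candidates: List[str]) -> str:
    """Preference-based Selection (PS): score each candidate with a
    preference model and return the highest-scoring response."""
    return max(candidates, key=lambda response: preference_score(context, response))


def respond(llm: Callable[[str], str],
            preference_score: Callable[[str, str], float],
            context: str, n: int = 5) -> str:
    """End-to-end: generate n diverse candidates, then pick one by preference."""
    return select_preferred(preference_score, context, generate_candidates(llm, context, n))
```

Any chat-completion client can stand in for `llm`, and any reward or preference model that scores (context, response) pairs can stand in for `preference_score`.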

View on arXiv: https://arxiv.org/abs/2506.15131
@article{lee2025_2506.15131,
  title={Modeling the One-to-Many Property in Open-Domain Dialogue with LLMs},
  author={Jing Yang Lee and Kong-Aik Lee and Woon-Seng Gan},
  journal={arXiv preprint arXiv:2506.15131},
  year={2025}
}