ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue

arXiv:2402.06967 · 10 February 2024
Jian Wang, Chak Tou Leong, Jiashuo Wang, Dongding Lin, Wenjie Li, Xiao-Yong Wei

Papers citing "Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue"

4 / 4 papers shown
DeepThink: Aligning Language Models with Domain-Specific User Intents
Yang Li, Mingxuan Luo, Yeyun Gong, Chen Lin, Jian Jiao, Yi Liu, Kaili Huang
LRM, ALM, ELM
08 Feb 2025

Prompting and Evaluating Large Language Models for Proactive Dialogues: Clarification, Target-guided, and Non-collaboration
Yang Deng, Lizi Liao, Liang Chen, Hongru Wang, Wenqiang Lei, Tat-Seng Chua
23 May 2023

Don't be Contradicted with Anything! CI-ToD: Towards Benchmarking Consistency for Task-oriented Dialogue System
Libo Qin, Tianbao Xie, Shijue Huang, Qiguang Chen, Xiao Xu, Wanxiang Che
23 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021