KwaiChat: A Large-Scale Video-Driven Multilingual Mixed-Type Dialogue Corpus

Abstract

Video-based dialogue systems, such as education assistants, have compelling application value and are thus garnering growing interest. However, current video-based dialogue systems rely on a single dialogue type, which limits their versatility in practical scenarios such as question answering and emotional dialogue. In this paper, we frame this challenge as generating video-driven multilingual mixed-type dialogues. To address it, we propose a novel task and create a human-to-human video-driven multilingual mixed-type dialogue corpus, termed KwaiChat, containing a total of 93,209 videos and 246,080 dialogues across 4 dialogue types, 30 domains, 4 languages, and 13 topics. Additionally, we establish baseline models on KwaiChat. An extensive analysis of 7 distinct LLMs on KwaiChat reveals that GPT-4o achieves the best performance yet still performs poorly on this task, even with the help of in-context learning and fine-tuning, indicating that the task is non-trivial and warrants further research.

@article{shi2025_2503.06899,
  title={KwaiChat: A Large-Scale Video-Driven Multilingual Mixed-Type Dialogue Corpus},
  author={Xiaoming Shi and Zeming Liu and Yiming Lei and Chenkai Zhang and Haitao Leng and Chuan Wang and Qingjie Liu and Wanxiang Che and Shaoguo Liu and Size Li and Yunhong Wang},
  journal={arXiv preprint arXiv:2503.06899},
  year={2025}
}