ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents
Chia-Yu Li
Daniel Ortega
Dirk Väth
Florian Lux
Lindsey Vanderlyn
Maximilian Schmidt
Michael Neumann
Moritz Völkel
Pavel Denisov
Sabrina Jenne
Zorica Kacarevic
Ngoc Thang Vu

Abstract
We present ADVISER, an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text and vision), socially-engaged (e.g., emotion recognition, engagement level prediction and backchanneling) conversational agents. The Python-based implementation of our toolkit is flexible, easy to use, and easy to extend not only for technically experienced users, such as machine learning researchers, but also for less technically experienced users, such as linguists or cognitive scientists, thereby providing a flexible platform for collaborative research. Link to open-source code: https://github.com/DigitalPhonetics/adviser