Collecting Interactive Multi-modal Datasets for Grounded Language Understanding
Shrestha Mohanty
Negar Arabzadeh
Milagro Teruel
Yuxuan Sun
Artem Zholus
Alexey Skrynnik
Mikhail Burtsev
Kavya Srinet
Aleksandr I. Panov
Arthur Szlam
Marc-Alexandre Côté
Julia Kiseleva

Abstract
Human intelligence can remarkably adapt quickly to new tasks and environments. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions. To facilitate research that can enable similar capabilities in machines, we make the following contributions: (1) we formalize the task of a collaborative embodied agent that uses natural language; (2) we develop a tool for extensive and scalable data collection; and (3) we collect the first dataset for interactive grounded language understanding.