Limitation Learning: Catching Adverse Dialog with GAIL
- OffRL
Main: 5 pages · 3 figures · 4 tables · Bibliography: 2 pages
Abstract
Imitation learning is a proven method for recovering a policy in the absence of rewards by leveraging expert demonstrations. In this work, we apply imitation learning to conversation. In doing so, we recover both a policy capable of responding to a user given a prompt (the input state) and a discriminator capable of distinguishing expert conversation from synthetic conversation. While our policy is effective, the discriminator's outputs reveal the limitations of current dialog models. We argue that this technique can be used to identify adverse behavior in arbitrary data-driven models commonly used for dialog-oriented tasks.
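The abstract describes the standard GAIL setup: a discriminator is trained to separate expert data from policy rollouts, and its score supplies a surrogate reward for the policy. The paper does not give implementation details, so the following is a minimal toy sketch under stated assumptions: conversations are represented as fixed-size embedding vectors (hypothetical), and the discriminator is a simple logistic model trained on the usual GAIL objective, E_expert[log D] + E_policy[log(1 − D)].

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # assumed embedding size for a conversation (hypothetical)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(x, w):
    """D(x): probability that conversation embedding x came from the expert."""
    return sigmoid(x @ w)

def discriminator_step(expert, synthetic, w, lr=0.1):
    """One gradient-ascent step on the GAIL discriminator objective:
    E_expert[log D(x)] + E_policy[log(1 - D(x))]."""
    grad = np.zeros_like(w)
    for x in expert:      # push D(x) toward 1 on expert conversations
        grad += (1.0 - discriminator(x, w)) * x
    for x in synthetic:   # push D(x) toward 0 on policy-generated conversations
        grad += -discriminator(x, w) * x
    return w + lr * grad / (len(expert) + len(synthetic))

def policy_reward(x, w):
    """Surrogate reward for the dialog policy: -log(1 - D(x)),
    large when the policy's output fools the discriminator."""
    return -np.log(1.0 - discriminator(x, w) + 1e-8)

# Toy stand-ins for expert and synthetic conversation embeddings.
expert = rng.normal(+0.5, 1.0, size=(64, DIM))
synthetic = rng.normal(-0.5, 1.0, size=(64, DIM))

w = np.zeros(DIM)
for _ in range(200):
    w = discriminator_step(expert, synthetic, w)

# A trained discriminator scores expert data higher than synthetic data;
# the paper's observation is that this gap itself exposes dialog-model limitations.
print(discriminator(expert, w).mean() > discriminator(synthetic, w).mean())
```

In the paper's framing, the interesting artifact is the discriminator itself: once it reliably separates expert from synthetic dialog, its scores can flag adverse or out-of-distribution model behavior.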
