
Improving Robustness and Generality of NLP Models Using Disentangled Representations

Abstract

Supervised neural networks, which first map an input x to a single representation z and then map z to the output label y, have achieved remarkable success in a wide range of natural language processing (NLP) tasks. Despite this success, neural models lack both robustness and generality: small perturbations to inputs can produce entirely different outputs, and the performance of a model trained on one domain drops drastically when it is tested on another. In this paper, we present methods to improve the robustness and generality of NLP models from the standpoint of disentangled representation learning. Instead of mapping x to a single representation z, the proposed strategy maps x to a set of representations {z_1, z_2, ..., z_K} while forcing them to be disentangled. These representations are then mapped to separate logits l_k, whose ensemble is used to make the final prediction y. We propose different methods to incorporate this idea into currently widely used models, including adding an L2 regularizer on the z_k or adding a Total Correlation (TC) term under the framework of the variational information bottleneck (VIB). We show that models trained with the proposed criteria provide better robustness and domain-adaptation ability in a wide range of supervised learning tasks.
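The architecture described above can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the linear encoders, the logit-averaging ensemble, and the pairwise-distance form of the L2 disentanglement penalty are all assumptions for illustration; all shapes and names (`W_enc`, `W_out`, `pairwise_l2_penalty`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def predict(x, W_enc, W_out):
    """Map x to K representations z_k, map each z_k to its own logits l_k,
    and average the logits to form the final prediction y.
    Hypothetical shapes: W_enc (K, d_z, d_x), W_out (K, n_classes, d_z)."""
    zs = np.einsum('kzd,d->kz', W_enc, x)        # K separate representations
    logits = np.einsum('kcz,kz->kc', W_out, zs)  # per-representation logits l_k
    return softmax(logits.mean(axis=0)), zs      # ensemble of logits -> y

def pairwise_l2_penalty(zs):
    """One plausible L2-style disentanglement term (an assumption, not the
    paper's exact regularizer): reward pairwise separation of the z_k, so
    minimizing this term pushes the representations apart."""
    K = zs.shape[0]
    sep = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            sep += np.sum((zs[i] - zs[j]) ** 2)
    return -sep

# Toy dimensions: input 8-d, each z_k 4-d, K=3 representations, 2 classes.
d_x, d_z, K, C = 8, 4, 3, 2
x = rng.normal(size=d_x)
W_enc = rng.normal(size=(K, d_z, d_x))
W_out = rng.normal(size=(K, C, d_z))
probs, zs = predict(x, W_enc, W_out)
```

In training, the penalty would be added to the supervised loss so that the K representations are encouraged to capture different factors of the input; the TC variant would instead penalize statistical dependence among the z_k under the VIB framework.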
