Encoder-decoder with Focus-mechanism for Sequence Labelling Based Spoken
Language Understanding
This paper investigates the encoder-decoder framework with attention for sequence-labelling-based Spoken Language Understanding. We introduce a BLSTM-LSTM encoder-decoder model to fully exploit the power of deep learning. In the sequence labelling task, the input and output sequences are aligned word by word, whereas the attention mechanism cannot guarantee this exact alignment. To address this limitation of attention in sequence labelling, we propose a novel focus mechanism. Experiments on the standard ATIS dataset show that BLSTM-LSTM with the focus mechanism sets a new state of the art, outperforming a standard BLSTM and an attention-based encoder-decoder. Further experiments show that the proposed model is also more robust to speech recognition errors.
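The contrast between attention and the focus mechanism can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dot-product scoring function, the vector dimensions, and the function names (`attention_context`, `focus_context`) are assumptions introduced here for illustration. Attention forms the decoder context as a soft mix over all encoder states, while focus takes exactly the encoder state at the current position.

```python
import math
import random

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_context(H, s):
    """Attention: context is a weighted sum of ALL encoder states,
    so word-by-word alignment is not guaranteed.
    Dot-product scoring is an illustrative assumption."""
    scores = [sum(h_k * s_k for h_k, s_k in zip(h, s)) for h in H]
    alpha = softmax(scores)                 # soft alignment weights
    d = len(H[0])
    return [sum(alpha[j] * H[j][k] for j in range(len(H)))
            for k in range(d)]

def focus_context(H, i):
    """Focus mechanism: for word-aligned sequence labelling, the
    context at decoder step i is exactly the i-th encoder state."""
    return H[i]

random.seed(0)
T, d = 5, 4
H = [[random.gauss(0, 1) for _ in range(d)] for _ in range(T)]  # encoder (e.g. BLSTM) states
s = [random.gauss(0, 1) for _ in range(d)]                      # current decoder state

c_attn = attention_context(H, s)   # blends all positions
c_focus = focus_context(H, 2)      # exactly H[2]
assert c_focus == H[2]
```

In this sketch the focus context at step `i` reproduces the encoder state `H[i]` exactly, which mirrors the word-by-word alignment that attention's soft weighting cannot guarantee.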