Learning Intrinsic Sparse Structures within Long Short-term Memory
Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) both in user devices possessing limited resources and in business clusters requiring quick responses to large-scale service requests. In this work, we focus on reducing the sizes of basic structures (including input updates, gates, hidden states, cell states and outputs) within Long Short-term Memory (LSTM) units, so as to learn homogeneously-sparse LSTMs. Independently reducing the sizes of those basic structures can result in unmatched dimensions among them and, consequently, end up with invalid LSTM units. To overcome this, we propose Intrinsic Sparse Structures (ISS) in LSTMs. By removing one component of ISS, the sizes of all basic structures are simultaneously reduced by one, so that the consistency of dimensions is maintained. By learning ISS within LSTM units, the resulting LSTMs remain regular LSTMs but with much smaller basic structures. Our method is successfully evaluated with state-of-the-art LSTMs in language modeling (on the Penn TreeBank dataset) and machine Question Answering (on the SQuAD dataset). Our source code is publicly available.
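To illustrate why one ISS component spans all basic structures, here is a minimal sketch (not the authors' implementation; it assumes the common convention of stacking the four LSTM gates along the first weight axis, and the function and variable names are ours) showing how dropping a single hidden/cell dimension removes matching rows and columns everywhere at once, keeping the unit a valid, smaller LSTM:

```python
import numpy as np

def remove_iss_component(W_ih, W_hh, b, W_next, k):
    """Remove hidden unit k from an LSTM layer and from the layer it feeds.

    W_ih:   (4h, d_in)  input-to-hidden weights, four gates stacked row-wise
    W_hh:   (4h, h)     hidden-to-hidden weights
    b:      (4h,)       gate biases
    W_next: (d_out, h)  weights of the layer consuming this LSTM's output
    k:      index of the ISS component (hidden/cell dimension) to drop
    """
    h = W_hh.shape[1]
    gate_rows = [k + g * h for g in range(4)]      # one row per gate
    W_ih = np.delete(W_ih, gate_rows, axis=0)      # input updates and gates
    W_hh = np.delete(W_hh, gate_rows, axis=0)
    W_hh = np.delete(W_hh, k, axis=1)              # recurrent input h_{t-1}
    b = np.delete(b, gate_rows, axis=0)
    W_next = np.delete(W_next, k, axis=1)          # receiver of the output
    return W_ih, W_hh, b, W_next

# Example: shrink hidden size 4 to 3 by removing ISS component k=1.
h, d_in, d_out = 4, 8, 5
W_ih, W_hh, b, W_next = (np.zeros((4 * h, d_in)), np.zeros((4 * h, h)),
                         np.zeros(4 * h), np.zeros((d_out, h)))
W_ih, W_hh, b, W_next = remove_iss_component(W_ih, W_hh, b, W_next, k=1)
print(W_ih.shape, W_hh.shape, b.shape, W_next.shape)
# (12, 8) (12, 3) (12,) (5, 3)
```

All weight blocks shrink consistently, which is the dimension-matching property the ISS grouping is designed to preserve.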