Learning Simpler Language Models with the Differential State Framework

Abstract

Learning useful information across long time lags is a critical and difficult problem for temporal neural models in tasks like language modeling. Existing architectures that address the issue are often complex and costly to train. The Differential State Framework (DSF) is a simple and high-performing design that unifies previously introduced gated neural models. DSF models maintain longer-term memory by learning to interpolate between a fast-changing, data-driven representation and a slowly changing, implicitly stable state. This requires hardly any more parameters than a classical simple recurrent network. Within the framework, a new architecture is presented, the Delta-RNN. In language modeling at the word and character levels, the Delta-RNN outperforms popular complex architectures, such as the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), and, when regularized, performs comparably to several state-of-the-art baselines. At the subword level, the Delta-RNN's performance is comparable to that of complex gated architectures.
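The interpolation at the heart of the framework can be made concrete. Below is a minimal NumPy sketch of one possible DSF-style update, in which the hidden state is an element-wise mixture of a fast, data-driven proposal and the slowly changing previous state. The weight names (`W`, `U`, `W_r`, `b_r`) and the exact gating form are illustrative assumptions, not the paper's full Delta-RNN, which includes additional interaction terms.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleDeltaCell:
    """Hypothetical, simplified DSF-style cell.

    The new state h_t is a learned interpolation between a fast,
    data-driven proposal z_t and the previous (slowly changing) state:

        z_t = tanh(W x_t + U h_{t-1})
        r_t = sigmoid(W_r x_t + b_r)           # interpolation gate
        h_t = (1 - r_t) * z_t + r_t * h_{t-1}
    """

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = rng = np.random.default_rng(seed)
        scale = 0.1
        self.W = rng.normal(0.0, scale, (hidden_dim, input_dim))
        self.U = rng.normal(0.0, scale, (hidden_dim, hidden_dim))
        self.W_r = rng.normal(0.0, scale, (hidden_dim, input_dim))
        self.b_r = np.zeros(hidden_dim)

    def step(self, x, h_prev):
        z = np.tanh(self.W @ x + self.U @ h_prev)  # fast, data-driven proposal
        r = sigmoid(self.W_r @ x + self.b_r)       # per-dimension mixing weights
        return (1.0 - r) * z + r * h_prev          # interpolated new state

# Usage: roll the cell over a sequence of input vectors.
cell = SimpleDeltaCell(input_dim=8, hidden_dim=16)
h = np.zeros(16)
for x in np.random.default_rng(1).normal(size=(5, 8)):
    h = cell.step(x, h)
```

Because the gate `r_t` is learned per element, such a cell can hold some state dimensions nearly constant across many steps (longer-term memory) while letting others track the input closely, and it adds only a single extra gate's worth of parameters over a classical simple recurrent network.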
