
Deep Memory Update

Abstract

Recurrent neural networks are key tools for sequential data processing, but they are notorious for being difficult to train. Challenges include capturing complex relations between consecutive states, as well as the stability and efficiency of training. In this paper, we introduce a recurrent neural architecture called Deep Memory Update (DMU), so named because it updates the previous memory state with a deep transformation of the lagged state and the network input. The architecture is able to learn the transformation of its internal state using an arbitrary nonlinear function. Its training is stable and relatively fast because the speed of training is adjusted according to layer depth. Even though DMU is built from simple components, the experimental results presented here confirm that it can compete with, and often outperform, state-of-the-art architectures such as Long Short-Term Memory, Gated Recurrent Units, and Recurrent Highway Networks.
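
To illustrate the idea stated in the abstract, the sketch below shows a recurrent cell whose previous memory state is updated by a multi-layer (deep) transformation of the lagged state and the current input, blended through a gate. This is only an illustrative reading of the abstract, not the paper's exact equations; the class name, the depth parameter, and the gating form are assumptions made for the example.

import torch
import torch.nn as nn

class DeepMemoryUpdateCell(nn.Module):
    # Illustrative sketch: the previous memory state is updated with a
    # deep transformation of the lagged state and the network input.
    def __init__(self, input_size, hidden_size, depth=2):
        super().__init__()
        layers, width = [], input_size + hidden_size
        for _ in range(depth):                      # deep transformation of [x_t, h_{t-1}]
            layers += [nn.Linear(width, hidden_size), nn.Tanh()]
            width = hidden_size
        self.candidate = nn.Sequential(*layers)     # proposes a new memory state
        self.update_gate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x_t, h_prev):
        z = torch.cat([x_t, h_prev], dim=-1)
        u = torch.sigmoid(self.update_gate(z))      # how much of the old state to keep
        h_hat = self.candidate(z)                   # deep candidate state
        return u * h_prev + (1.0 - u) * h_hat       # update of the previous memory state

At each time step the cell is called as h = cell(x_t, h), starting from a zero state; the actual DMU formulation and its depth-dependent training-speed scaling are described in the paper itself.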
