Learning to update Auto-associative Memory in Recurrent Neural Networks
for Improving Sequence Memorization
Learning to remember sequences, especially long ones, remains a challenge for recurrent neural networks. Register memory and attention mechanisms have both been proposed to address it, but they either incur high computational cost to keep the memory differentiable, or bias the RNN's representation learning toward encoding short local contexts rather than long sequences. Associative memory, which studies how multiple patterns can be compressed into a fixed-size memory, has received little attention in recent years. Although some recent work introduces associative memory into RNNs and mimics the energy-decay process of Hopfield networks, it inherits the shortcomings of rule-based memory updates, and its memory capacity is limited. This paper proposes learning the memory update rule jointly with the task objective, aiming to increase memory capacity so that long sequences can be remembered faster. Beyond the learned update rule, we propose an architecture that uses multiple such associative memories to encode more complex inputs. We report several interesting observations in comparisons against other RNN architectures on well-studied sequence learning tasks.
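For context, the fixed rule-based memory update the abstract contrasts against is the classic Hopfield-style auto-associative memory: patterns are written with a Hebbian outer-product rule and retrieved by iterating a sign update. The paper's contribution is to learn the update rule instead; the sketch below shows only the classic fixed rule, with illustrative function names and deterministic example patterns that are not from the paper.

```python
import numpy as np

def hebbian_store(patterns):
    """Write bipolar (+1/-1) patterns into a weight matrix via the
    classic Hebbian outer-product rule: W = (1/n) * sum_i p_i p_i^T,
    with self-connections removed."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W / n

def recall(W, probe, steps=10):
    """Retrieve a stored pattern from a (possibly noisy) probe by
    iterating the update s <- sign(W s) until it settles."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties consistently
    return s

# Two orthogonal bipolar patterns (zero crosstalk), then recover the
# first one from a probe with 8 of 64 bits flipped.
p1 = np.ones(64)
p2 = np.tile([1.0, -1.0], 32)
patterns = np.stack([p1, p2])
W = hebbian_store(patterns)
noisy = p1.copy()
noisy[:8] *= -1
recovered = recall(W, noisy)  # converges back to p1
```

The fixed rule's capacity is famously limited (roughly 0.14n patterns for n units), which is the limitation the paper targets by making the write rule itself learnable.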