Structured Compression and Sharing of Representational Space for Continual Learning

IEEE Access, 2020
Abstract

Humans learn adaptively and efficiently throughout their lives. However, incrementally learning tasks causes artificial neural networks to overwrite relevant information learned about older tasks, resulting in 'catastrophic forgetting'. Efforts to overcome this phenomenon use resources poorly, for instance by saving older data or parametric importance scores, or by growing the network architecture. We propose an algorithm that enables a network to learn continually and efficiently by partitioning the learnt space into a Core space, which serves as a condensed knowledge base over previously learned tasks, and a Residual space, which is akin to a scratch space for learning the current task. After learning each task, the Residual is analyzed for redundancy, both within itself and with the learnt Core space, and a minimal set of dimensions is added to the Core space. The remaining Residual is freed up for learning the next task. We evaluate our algorithm on the P-MNIST, CIFAR-10 and CIFAR-100 datasets and achieve accuracy comparable to state-of-the-art methods while overcoming catastrophic forgetting. Additionally, the structured nature of the resulting architecture makes our algorithm well suited for practical use, giving up to a 5x improvement in inference energy efficiency over the current state-of-the-art.
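The abstract describes a per-task update: analyze the Residual space for redundancy, both internally and against the existing Core space, then fold a minimal set of dimensions into the Core. The following is a minimal sketch of one plausible realization of that step, assuming an SVD-based redundancy analysis over layer activations; the function name, the variance threshold, and the use of plain NumPy SVD are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def update_core_space(core_basis, residual_acts, var_threshold=0.99):
    """Condense the Residual into the Core space after learning a task.

    core_basis:    (d, k) matrix with orthonormal columns spanning the
                   current Core space (k may be 0 before the first task).
    residual_acts: (n, d) activations collected while learning the task.
    var_threshold: fraction of residual variance the new directions
                   must capture (hypothetical choice of criterion).

    Returns the enlarged (d, k') Core basis.
    """
    X = residual_acts.T  # (d, n): one column per sample
    # Redundancy with the learnt Core space: project out everything the
    # existing Core basis already explains.
    if core_basis.shape[1] > 0:
        X = X - core_basis @ (core_basis.T @ X)
    # Redundancy within the Residual itself: the SVD ranks directions
    # by how much activation variance each one carries.
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(S**2) / np.sum(S**2)
    # Keep the minimal set of directions reaching the variance target.
    r = int(np.searchsorted(energy, var_threshold) + 1)
    return np.hstack([core_basis, U[:, :r]])

# Toy usage: activations with 4 dominant directions out of 16.
rng = np.random.default_rng(0)
d = 16
core = np.zeros((d, 0))  # empty Core before the first task
acts = rng.normal(size=(200, d)) @ np.diag([5.0] * 4 + [0.1] * 12)
core = update_core_space(core, acts)
print(core.shape)  # only a few dominant directions enter the Core
```

The remaining orthogonal complement of the returned basis plays the role of the freed-up Residual, available as scratch space for the next task.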
