Does Continual Learning = Catastrophic Forgetting?
Continual learning is known to suffer from catastrophic forgetting, a phenomenon in which earlier learned concepts are forgotten in favor of more recently seen samples. In this work, we challenge the assumption that continual learning is inevitably associated with catastrophic forgetting by presenting a set of tasks that, surprisingly, do not suffer from catastrophic forgetting when learned continually. The robustness of these tasks suggests the potential of a proxy representation learning task for continual classification. We further introduce a novel yet simple algorithm, YASS, that achieves state-of-the-art performance on the class-incremental categorization task, and we provide insight into the benefit of learning the representation continuously. Finally, we present converging evidence on the forgetting dynamics of representation learning in continual models. The codebase, dataset, and pre-trained models released with this article can be found at https://github.com/rehg-lab/CLRec.