
The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction

International Conference on 3D Vision (3DV), 2021
Abstract

Continual learning has been studied extensively for classification tasks, with methods developed primarily to avoid catastrophic forgetting, a phenomenon in which concepts learned earlier are overwritten as the model trains on more recent samples. In this work, we present a set of continual 3D object shape reconstruction tasks, including complete 3D shape reconstruction from different input modalities and visible-surface (2.5D) reconstruction, that surprisingly demonstrate positive (backward and forward) knowledge transfer when trained with vanilla SGD alone and without additional heuristics. We provide evidence that continuously updated representation learning for single-view 3D shape reconstruction improves performance on both learned and novel categories over time. We also present a novel analysis of knowledge-transfer ability by examining the shift in the output distribution across sequential learning tasks. Finally, we show that the robustness of these tasks suggests the potential of using a proxy representation-learning task for continual classification. The codebase, dataset, and pre-trained models released with this article can be found at https://github.com/rehg-lab/CLRec.
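To make the training protocol concrete, the sketch below (not the paper's code; the tasks, model, and hyperparameters are all hypothetical simplifications) shows the sequential-task setup the abstract describes: a model trained on one task at a time with plain SGD, no replay buffer and no regularization heuristic, with performance on the first task re-measured after each subsequent task. When the tasks share underlying structure, accuracy on the earlier task can be preserved or even improve, which is the kind of transfer behavior the paper reports at far larger scale.

```python
# Illustrative sketch only: sequential training with vanilla SGD and no
# continual-learning heuristics (no replay, no weight regularization).
# The two toy regression "tasks" are hypothetical stand-ins for the
# paper's 3D reconstruction tasks.
import random

random.seed(0)

def make_task(slope, n=200):
    # A toy 1-D regression task: y = slope * x + small noise.
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        data.append((x, slope * x + random.gauss(0.0, 0.01)))
    return data

def sgd_epoch(w, data, lr=0.1):
    # One pass of plain per-sample SGD on squared error for y = w * x.
    for x, y in data:
        w -= lr * 2.0 * (w * x - y) * x
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Two related tasks, seen strictly one after the other (never jointly).
tasks = [make_task(1.0), make_task(1.1)]
w = 0.0
for t, data in enumerate(tasks):
    for _ in range(20):
        w = sgd_epoch(w, data)
    # Re-evaluate the *first* task after finishing each task: if this
    # error stays low, the earlier task was not catastrophically forgotten.
    print(f"after task {t}: MSE on task 0 = {mse(w, tasks[0]):.4f}")
```

Because the two toy tasks share most of their structure, sequential SGD leaves the first task's error low; with dissimilar tasks the same loop would exhibit the classic forgetting that classification-focused continual learning methods are designed to combat.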
