
Self-Training for Class-Incremental Semantic Segmentation

IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020
Abstract

We study incremental learning for semantic segmentation, where labeled data from previous tasks is no longer available when learning new classes. When incrementally learning new classes, deep neural networks suffer from catastrophic forgetting of previously learned knowledge. To address this problem, we propose a self-training approach that leverages unlabeled data for rehearsal of previous knowledge. We further propose conflict reduction to resolve conflicts between the pseudo labels generated by the old and new models, and show that maximizing self-entropy further improves results by smoothing overconfident predictions. Experiments demonstrate state-of-the-art results, with relative gains of up to 114% on Pascal-VOC 2012 and 8.5% on the more challenging ADE20K over previous state-of-the-art methods.
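To make the two components concrete, the sketch below illustrates one plausible reading of the abstract: pseudo labels on unlabeled images are merged from the old and new models with a simple confidence-based conflict-reduction heuristic, and a self-entropy term is computed for use as a maximization regularizer. The merging rule, the confidence threshold, and all function names are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pseudo_labels_with_conflict_reduction(old_logits, new_logits, conf_thresh=0.9):
    """Merge pseudo labels from the old and new models on an unlabeled image.

    Hypothetical rule: keep whichever model's per-pixel prediction is more
    confident, and ignore pixels where neither model is confident enough.
    """
    old_prob, old_label = F.softmax(old_logits, dim=1).max(dim=1)
    new_prob, new_label = F.softmax(new_logits, dim=1).max(dim=1)

    # Resolve conflicts: prefer the higher-confidence prediction per pixel.
    label = torch.where(old_prob > new_prob, old_label, new_label)

    # Mask out low-confidence pixels (255 = common ignore index in segmentation).
    conf = torch.maximum(old_prob, new_prob)
    label[conf < conf_thresh] = 255
    return label

def self_entropy(logits):
    """Mean per-pixel entropy of the predictive distribution."""
    prob = F.softmax(logits, dim=1)
    log_prob = F.log_softmax(logits, dim=1)
    return -(prob * log_prob).sum(dim=1).mean()
```

Under this reading, training would minimize a cross-entropy loss on the merged pseudo labels minus a small weight times `self_entropy(new_logits)`, so that maximizing self-entropy smooths overconfident predictions; the weighting scheme here is likewise an assumption.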
