MixMask: Revisiting Masked Siamese Self-supervised Learning in Asymmetric Distance

British Machine Vision Conference (BMVC), 2022
Abstract

Recent advances in self-supervised learning integrate masked modeling and Siamese networks into a single framework to reap the advantages of both techniques. However, the erase-based masking scheme used in masked image modeling is tailored to the patchifying mechanism of ViT and was not originally designed for Siamese ConvNets. Existing approaches simply inherit the default loss design from previous Siamese networks and ignore the information loss incurred by the masking operation. In this paper, we propose a filling-based masking strategy called MixMask, which prevents the information loss caused by the randomly erased regions of an image in the vanilla masking method. We further introduce a flexible loss-function design that accounts for the change in semantic distance between two differently mixed views, adapting the integrated architecture and avoiding mismatches between the transformed input and the objective in Masked Siamese ConvNets (MSCN). The flexible loss distance is computed according to the proposed mix-masking scheme. Extensive experiments on CIFAR-100, Tiny-ImageNet, and ImageNet-1K demonstrate that the proposed framework achieves better accuracy under linear probing, semi-supervised, and supervised finetuning, outperforming the state-of-the-art MSCN by a significant margin. We also show its superiority on the downstream tasks of object detection and segmentation. Our source code is available at https://github.com/LightnessOfBeing/MixMask.
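The core idea of filling-based masking can be illustrated with a minimal sketch: instead of zeroing out masked regions (erase-based masking), the masked blocks of one image are filled with the corresponding blocks of a second image, and the resulting mix ratio can then be used to reweight the loss distance between views. The function names (`make_grid_mask`, `mix_mask`), the grid-block mask layout, and the specific parameters below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def make_grid_mask(h, w, grid=4, keep_prob=0.5, rng=None):
    # Binary mask of grid x grid blocks, upsampled to full (h, w) resolution.
    # A 1 keeps the pixel from the primary image; a 0 marks a "masked" block.
    rng = np.random.default_rng(rng)
    blocks = (rng.random((grid, grid)) < keep_prob).astype(np.float32)
    return np.kron(blocks, np.ones((h // grid, w // grid), dtype=np.float32))

def mix_mask(img_a, img_b, grid=4, keep_prob=0.5, rng=None):
    # Filling-based masking: the blocks that erase-based masking would
    # zero out are filled with pixels from img_b, so no information is
    # simply discarded from the view.
    h, w = img_a.shape[:2]
    mask = make_grid_mask(h, w, grid, keep_prob, rng)
    m = mask[..., None] if img_a.ndim == 3 else mask
    mixed = m * img_a + (1.0 - m) * img_b
    # lam is the fraction of pixels coming from img_a; a mix-aware loss
    # can use it to scale the semantic distance between the two views.
    lam = float(mask.mean())
    return mixed, lam
```

With `lam` in hand, a mix-aware objective could, for instance, weight the distance to `img_a`'s embedding by `lam` and the distance to `img_b`'s embedding by `1 - lam`, analogous to how mixup-style losses are combined.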
