Understanding Representation Quality in Self-Supervised Models

AAAI Conference on Artificial Intelligence (AAAI), 2022
Abstract

Self-supervised learning has shown impressive results on downstream classification tasks. However, there is limited work on understanding the failure modes of self-supervised models and interpreting their learned representations. In this paper, we study the representation spaces of six state-of-the-art self-supervised models: SimCLR, SwAV, MoCo, BYOL, DINO, and SimSiam. Without using class-label information, we discover highly activating features that correspond to unique physical attributes in images and occur mostly in correctly classified representations. Using these features, we propose the Self-Supervised Representation Quality Score (Q-Score), a model-agnostic, unsupervised score that reliably predicts whether a given sample is likely to be misclassified during linear evaluation, achieving an AUPRC of 91.45 on ImageNet-100 and 78.78 on ImageNet-1K. Q-Score can also be used as a regularization term on any self-supervised model to remedy low-quality representations during pre-training. We show that pre-training with Q-Score regularization boosts the performance of all six models on ImageNet-1K, ImageNet-100, CIFAR-10, CIFAR-100, and STL-10, with an average relative increase of 1.8% in top-1 accuracy under linear evaluation. On ImageNet-100, BYOL shows a 7.2% relative improvement, and on ImageNet-1K, SimCLR shows a 4.7% relative improvement over their baselines. Finally, using gradient heatmaps and Salient ImageNet masks, we define a metric to quantify the interpretability of each representation. We show that highly activating features are strongly correlated with core attributes and that enhancing these features through Q-Score regularization improves overall representation interpretability for all six self-supervised models.
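The abstract evaluates representations via the standard linear-evaluation protocol: freeze the pre-trained encoder and train only a linear classifier on its output features. A minimal sketch of that protocol is below; it is a generic illustration, not the paper's code, and the synthetic Gaussian features stand in for a real encoder's frozen representations.

```python
# Sketch of the linear-evaluation protocol: train a linear classifier on
# frozen representations and report top-1 accuracy. Synthetic features are
# used here in place of an actual self-supervised encoder (assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend these are frozen 128-d representations of 1000 images from 10 classes.
n, d, k = 1000, 128, 10
labels = rng.integers(0, k, size=n)
class_means = rng.normal(scale=2.0, size=(k, d))      # separable class clusters
features = class_means[labels] + rng.normal(size=(n, d))  # representation + noise

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

# Linear evaluation: a logistic-regression head on top of frozen features.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
top1 = clf.score(X_te, y_te)
print(f"linear-eval top-1 accuracy: {top1:.3f}")
```

A sample's misclassification under this protocol is exactly what Q-Score aims to predict without labels; the paper's AUPRC numbers measure that prediction on ImageNet-100 and ImageNet-1K.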
