
A Survey of the Self Supervised Learning Mechanisms for Vision Transformers

Abstract

Deep supervised learning models require a high volume of labeled data to attain sufficiently good results. However, the practice of gathering and annotating such big data is costly and laborious. Recently, the application of self-supervised learning (SSL) in vision tasks has gained significant attention. The intuition behind SSL is to exploit the synchronous relationships within the data as a form of self-supervision, which can be versatile. In the current big data era, most of the data is unlabeled, and the success of SSL thus relies on finding ways to utilize this vast amount of unlabeled data. It is therefore preferable for deep learning algorithms to reduce their reliance on human supervision and instead derive supervision from the inherent relationships within the data. With the advent of Vision Transformers (ViTs), which have achieved remarkable results in computer vision, it is crucial to explore and understand the various SSL mechanisms employed for training these models, specifically in scenarios where limited labeled data are available. In this survey, we develop a comprehensive taxonomy that systematically classifies SSL techniques according to their representations and the pre-training tasks being applied. Additionally, we discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field. Furthermore, we present a comparative analysis of different SSL methods, evaluate their strengths and limitations, and identify potential avenues for future research.
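
To make the idea of a pre-training (pretext) task concrete, below is a minimal sketch, not taken from the paper, of one widely used SSL mechanism for ViTs: masked patch reconstruction, where the model learns by predicting the pixels of randomly hidden image patches from the visible ones. The class name MaskedPatchSSL and all hyperparameters (image size, patch size, mask ratio, model width) are illustrative assumptions, not the survey's notation.

```python
# Illustrative sketch of a masked-patch-reconstruction pretext task for a tiny ViT-style encoder.
# All sizes and names are arbitrary; a real setup would use a full ViT and large unlabeled datasets.
import torch
import torch.nn as nn

class MaskedPatchSSL(nn.Module):
    def __init__(self, img_size=32, patch_size=8, dim=64, depth=2, mask_ratio=0.5):
        super().__init__()
        self.patch_size = patch_size
        self.num_patches = (img_size // patch_size) ** 2
        patch_dim = 3 * patch_size * patch_size
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_dim, dim)                  # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.decode = nn.Linear(dim, patch_dim)                 # reconstruct raw pixels

    def patchify(self, x):
        # (B, 3, H, W) -> (B, N, 3*P*P): split the image into non-overlapping patches.
        p = self.patch_size
        b, c, h, w = x.shape
        x = x.reshape(b, c, h // p, p, w // p, p)
        return x.permute(0, 2, 4, 1, 3, 5).reshape(b, -1, c * p * p)

    def forward(self, x):
        patches = self.patchify(x)                              # (B, N, patch_dim)
        tokens = self.embed(patches)
        # Randomly choose patches to hide; zeroed tokens stand in for a learnable mask token.
        mask = torch.rand(tokens.shape[:2], device=x.device) < self.mask_ratio
        tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0) + self.pos
        pred = self.decode(self.encoder(tokens))
        # Supervision comes from the image itself: reconstruct only the masked patches.
        return ((pred - patches) ** 2)[mask].mean()

# Usage: one self-supervised step on a batch of unlabeled images.
model = MaskedPatchSSL()
images = torch.randn(4, 3, 32, 32)      # stand-in for unlabeled data
loss = model(images)
loss.backward()
```

The key design point this sketch illustrates is that the training signal (the hidden patch pixels) is derived entirely from the unlabeled input, so no human annotation is needed; other pretext-task families covered by such surveys (contrastive, clustering-based, self-distillation) differ mainly in how that signal is constructed.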

@article{khan2025_2408.17059,
  title={A Survey of the Self Supervised Learning Mechanisms for Vision Transformers},
  author={Asifullah Khan and Anabia Sohail and Mustansar Fiaz and Mehdi Hassan and Tariq Habib Afridi and Sibghat Ullah Marwat and Farzeen Munir and Safdar Ali and Hannan Naseem and Muhammad Zaigham Zaheer and Kamran Ali and Tangina Sultana and Ziaurrehman Tanoli and Naeem Akhter},
  journal={arXiv preprint arXiv:2408.17059},
  year={2025}
}