CASS: Cross Architectural Self-Supervision for Medical Image Analysis
Recent advances in deep learning and computer vision have reduced many barriers to automated medical image analysis, allowing algorithms to process label-free images and improve performance. Specifically, Transformers provide a global view of the image that Convolutional Neural Networks (CNNs) inherently lack. Here we present Cross Architectural Self-Supervision (CASS), a novel self-supervised learning approach that leverages a Transformer and a CNN simultaneously. Compared with existing state-of-the-art self-supervised learning approaches, we empirically showed that CASS-trained CNNs and Transformers across three diverse datasets gained an average of 8.5% with 100% labelled data, 7.3% with 10% labelled data, and 11.5% with 1% labelled data. Notably, one of the test datasets comprised histopathology slides of an autoimmune disease, a condition with minimal data that has been underrepresented in medical imaging. In addition, our findings revealed that CASS is more robust than existing state-of-the-art self-supervised methods. The code is open source and available on GitHub.
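The core idea the abstract describes, training a CNN and a Transformer on the same unlabeled batch and aligning their representations, can be sketched as below. This is a minimal illustration, not the paper's implementation: the "encoders" are stand-in linear projections rather than real CNN/Transformer backbones, and the negative-cosine-similarity loss is one common choice for pulling two embeddings together in self-supervised learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoders": random linear projections used in place of a
# real CNN and a real Transformer backbone (hypothetical simplification).
def cnn_encoder(x, W):
    return x @ W

def transformer_encoder(x, W):
    return x @ W

def cross_architecture_loss(z1, z2, eps=1e-8):
    """Negative mean cosine similarity between the two architectures'
    embeddings of the same batch; minimizing it aligns them."""
    z1 = z1 / (np.linalg.norm(z1, axis=1, keepdims=True) + eps)
    z2 = z2 / (np.linalg.norm(z2, axis=1, keepdims=True) + eps)
    return -np.mean(np.sum(z1 * z2, axis=1))

# One unlabeled batch of flattened images (batch=8, input dim=64),
# embedded to a shared 32-dimensional space by both branches.
x = rng.normal(size=(8, 64))
W_cnn = rng.normal(size=(64, 32))
W_vit = rng.normal(size=(64, 32))

z_cnn = cnn_encoder(x, W_cnn)
z_vit = transformer_encoder(x, W_vit)

loss = cross_architecture_loss(z_cnn, z_vit)
print(float(loss))  # a scalar in [-1, 1]; gradient descent on it would align the two views
```

No labels appear anywhere in the loop above: the supervision signal comes purely from the disagreement between the two architectures' views of the same images.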
View on arXiv