
Self-Supervised Siamese Autoencoders

Abstract

In contrast to fully-supervised models, self-supervised representation learning only needs a fraction of the data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task so that they are able to extract meaningful features from raw input data afterwards. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification. However, each has its own shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using image classification as a downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios. Crucially, this includes conditions in which only a small amount of labeled data is available. Empirically, the Siamese component has the greater impact, but the denoising autoencoder remains necessary to improve performance.
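The abstract does not spell out the combined objective, but the general idea of joining a denoising autoencoder with a Siamese network can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the weight names, the shared encoder, the corruption scheme, and the SimSiam-style negative-cosine similarity term are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration
d_in, d_lat = 8, 4

# Randomly initialized encoder/decoder/predictor weights (no training here)
W_enc = rng.normal(size=(d_in, d_lat))
W_dec = rng.normal(size=(d_lat, d_in))
W_pred = rng.normal(size=(d_lat, d_lat))

def encode(x):
    # Shared (Siamese) encoder applied to both views
    return np.tanh(x @ W_enc)

def decode(z):
    # Decoder maps the latent code back to input space
    return z @ W_dec

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

x = rng.normal(size=d_in)

# Two corrupted "views" of the same clean input (denoising-style corruption)
x1 = x + 0.1 * rng.normal(size=d_in)
x2 = x + 0.1 * rng.normal(size=d_in)

z1, z2 = encode(x1), encode(x2)

# Denoising reconstruction loss: decode each corrupted view back to the clean input
recon = np.mean((decode(z1) - x) ** 2) + np.mean((decode(z2) - x) ** 2)

# Siamese term (SimSiam-style assumption): negative cosine similarity between
# the predicted embedding of one view and the embedding of the other,
# symmetrized over both views
p1, p2 = z1 @ W_pred, z2 @ W_pred
siamese = -0.5 * (cosine(p1, z2) + cosine(p2, z1))

total_loss = recon + siamese
print(total_loss)
```

In a real implementation both terms would be minimized jointly by gradient descent, with the reconstruction term anchoring the representation to the input and the Siamese term enforcing invariance across views.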

@article{baier2025_2304.02549,
  title={Self-Supervised Siamese Autoencoders},
  author={Friederike Baier and Sebastian Mair and Samuel G. Fadel},
  journal={arXiv preprint arXiv:2304.02549},
  year={2025}
}