How to Understand Masked Autoencoders
"Masked Autoencoders (MAE) Are Scalable Vision Learners" revolutionizes the self-supervised learning that not only achieves the state-of-the-art for image pretraining, but also is a milestone that bridged the gap between the visual and linguistic masked autoencoding (BERT-style) pretrainings. However, to our knowledge, to date there are no theoretical perspectives to explain the powerful expressivity of MAE. In this paper, we, for the first time, propose a unified theoretical framework that provides a mathematical understanding for MAE. Particularly, we explain the patch-based attention approaches of MAE using an integral kernel under a non-overlapping domain decomposition setting. To help the researchers to further grasp the main reasons of the great success of MAE, based on our framework, we contribute five questions and answer them by insights from operator theory with mathematical rigor.