Deep Transformers Thirst for Comprehensive-Frequency Data
Current research indicates that introducing inductive bias (IB) can improve the performance of the Vision Transformer (ViT). However, existing methods simultaneously introduce a pyramid structure to offset the additional FLOPs and parameters that IB incurs. Such a structure breaks the unification of computer vision and natural language processing (NLP) and is also unsuitable for pixel-level tasks. We study an NLP model called LSRA, which introduces IB under a pyramid-free structure with fewer FLOPs and parameters. We analyze why it outperforms ViT and find that introducing IB increases the share of high-frequency data (HFD) in each layer, giving 'attention' more comprehensive information. As a result, each head attends to more diverse information, reflected in an increased diversity of head-attention distances (Head Diversity). However, the way LSRA introduces IB is inefficient. To further improve the HFD share, increase Head Diversity, and explore the potential of transformers, we propose EIT. EIT efficiently introduces IB to ViT with a novel decreasing convolutional structure and a pyramid-free structure. On four small-scale datasets, EIT improves accuracy by 13% on average with fewer parameters and FLOPs than ViT. On ImageNet-1K, EIT achieves 70%, 78%, 81% and 82% Top-1 accuracy with 3.5M, 8.9M, 16M and 25M parameters, respectively, which is competitive with representative state-of-the-art (SOTA) methods. In particular, EIT achieves SOTA performance among models with a pyramid-free structure. Finally, ablation studies show that EIT does not require position embedding, which opens the possibility of simplified adaptation to more visual tasks without redesigning the embedding.
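To make the pyramid vs. pyramid-free distinction concrete, the sketch below contrasts the token-count schedule of a typical pyramid ViT (the grid is downsampled at stage boundaries) with a pyramid-free layout in which every layer sees the full-resolution token grid. This is a hypothetical illustration of the structural difference the abstract describes, not the authors' implementation; the image size, patch size, depth, and downsampling schedule are all assumed values.

```python
def token_schedule(image_size=224, patch=16, depth=12, pyramid=False):
    """Return the number of tokens seen by each transformer layer.

    pyramid=False : pyramid-free, the token grid stays fixed across layers.
    pyramid=True  : the grid side is halved every depth // 4 layers
                    (an assumed, typical pyramid stage schedule).
    """
    side = image_size // patch  # tokens per side of the patch grid
    tokens = []
    for layer in range(depth):
        if pyramid and layer > 0 and layer % (depth // 4) == 0:
            side = max(side // 2, 1)  # stage boundary: spatial downsampling
        tokens.append(side * side)
    return tokens

flat = token_schedule(pyramid=False)
pyr = token_schedule(pyramid=True)
print(flat[0], flat[-1])  # 196 196 -> every layer keeps the full 14x14 grid
print(pyr[0], pyr[-1])    # 196 1   -> later layers see a much coarser grid
```

The pyramid-free schedule preserves one token per patch at every depth, which is why such structures adapt more directly to pixel-level tasks, while the pyramid trades that spatial resolution for reduced compute in deeper layers.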