
SoT: Delving Deeper into Classification Head for Transformer

Abstract

Transformer models are not only successful in natural language processing (NLP) but also demonstrate high potential in computer vision (CV). Despite great advances, most existing works focus only on improving the architectures while paying little attention to the classification head. For years, transformer models have relied exclusively on the classification token to construct the final classifier, without explicitly harnessing the high-level word tokens. In this paper, we propose a novel transformer model, called second-order transformer (SoT), which exploits both the classification token and the word tokens for the classifier. Specifically, we empirically show that the high-level word tokens contain rich information, are per se very competent for classification, and are moreover complementary to the classification token. To effectively harness this rich information, we propose multi-headed global cross-covariance pooling with singular value power normalization, which shares a similar philosophy with, and is thus more compatible with, the transformer block than commonly used pooling methods. We then comprehensively study how to explicitly combine the word tokens with the classification token to build the final classification head. For CV tasks, our SoT significantly improves state-of-the-art vision transformers on challenging benchmarks including ImageNet and ImageNet-A. For NLP tasks, through fine-tuning pretrained language transformers including GPT and BERT, our SoT greatly boosts performance on widely used tasks such as CoLA and RTE. Code will be available at https://peihuali.org/SoT
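To make the pooling idea more concrete, the following is a minimal PyTorch sketch of cross-covariance pooling over word tokens with singular value power normalization, in the spirit of the abstract. The function names (svd_power_norm, cross_cov_pool), the two-head pairing, and the power alpha = 0.5 are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch

def svd_power_norm(cov, alpha=0.5):
    # Singular value power normalization: raise the singular values of the
    # (cross-)covariance matrix to a power alpha (0.5 assumed here).
    u, s, vh = torch.linalg.svd(cov)
    return u @ torch.diag_embed(s.clamp(min=1e-6) ** alpha) @ vh

def cross_cov_pool(word_tokens, num_heads=2, alpha=0.5):
    # word_tokens: (batch, num_tokens, dim) high-level word-token features.
    b, n, d = word_tokens.shape
    # Center the tokens, then split channels into heads.
    x = word_tokens - word_tokens.mean(dim=1, keepdim=True)
    heads = x.reshape(b, n, num_heads, d // num_heads)
    # Cross-covariance between the first two heads (illustrative pairing).
    a, c = heads[:, :, 0, :], heads[:, :, 1, :]
    cov = a.transpose(1, 2) @ c / (n - 1)
    cov = svd_power_norm(cov, alpha)
    # Flatten the normalized matrix into a second-order descriptor.
    return cov.flatten(1)
```

In this reading, the resulting second-order descriptor would be combined (e.g., concatenated) with the classification token before the final linear classifier, which is the kind of classification head the abstract refers to.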
