UFO-ViT: High Performance Linear Vision Transformer without Softmax

29 September 2021
Jeonggeun Song
arXiv:2109.14382
Abstract

Vision transformers have become one of the most important models for computer vision tasks. Although they outperform prior works, they require heavy computational resources whose scale is quadratic in N, the number of input tokens. This is a major drawback of the traditional self-attention (SA) algorithm. Here, we propose the Unit Force Operated Vision Transformer (UFO-ViT), a novel SA mechanism with linear complexity. The main approach of this work is to eliminate the nonlinearity from the original SA: we factorize the matrix multiplication of the SA mechanism without a complicated linear approximation. By modifying only a few lines of code of the original SA, the proposed models outperform most transformer-based models on image classification and dense prediction tasks in most capacity regimes.
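
The complexity claim can be made concrete with a short sketch. Below is a minimal PyTorch illustration of softmax-free attention via factorization: computing K^T V first costs O(N·d²), linear in the token count N, instead of the O(N²·d) of the standard softmax(QK^T)V. This is not the authors' released implementation; the class name, the plain L2 normalization standing in for the paper's XNorm scheme, and the layer layout are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttentionSketch(nn.Module):
    """Softmax-free attention sketch, O(N) in the number of tokens N.

    Illustrative only: plain L2 normalization is used where the paper
    defines its own XNorm, and head splitting is kept deliberately simple.
    """

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        assert dim % heads == 0, "dim must be divisible by heads"
        self.heads = heads
        self.head_dim = dim // heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, dim)
        B, N, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape each to (B, heads, N, head_dim).
        q = q.reshape(B, N, self.heads, self.head_dim).transpose(1, 2)
        k = k.reshape(B, N, self.heads, self.head_dim).transpose(1, 2)
        v = v.reshape(B, N, self.heads, self.head_dim).transpose(1, 2)

        # Factorized product: k^T v is (B, heads, head_dim, head_dim),
        # so the token dimension N never appears squared.
        kv = k.transpose(-2, -1) @ v

        # Normalization in place of softmax (the paper's XNorm; simple
        # L2 normalization stands in here as an assumption).
        q = F.normalize(q, dim=-1)
        kv = F.normalize(kv, dim=-2)

        out = q @ kv  # (B, heads, N, head_dim)
        out = out.transpose(1, 2).reshape(B, N, D)
        return self.proj(out)


if __name__ == "__main__":
    attn = LinearAttentionSketch(dim=192, heads=3)
    tokens = torch.randn(2, 197, 192)  # hypothetical batch of ViT patch tokens
    print(attn(tokens).shape)  # torch.Size([2, 197, 192])
```

Because K^T V collapses the token dimension before it multiplies Q, compute and memory grow linearly with the number of tokens, which is the property the abstract refers to.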
