Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers

18 August 2023
Tobias Christian Nauen
Sebastián M. Palacio
Federico Raue
Andreas Dengel
Abstract

Self-attention in Transformers comes with a high computational cost because of its quadratic complexity, but the effectiveness of Transformers in addressing problems in language and vision has sparked extensive research aimed at enhancing their efficiency. However, diverse experimental conditions, spanning multiple input domains, prevent a fair comparison based solely on reported results, posing challenges for model selection. To address this gap in comparability, we perform a large-scale benchmark of more than 45 models for image classification, evaluating key efficiency aspects, including accuracy, speed, and memory usage. Our benchmark provides a standardized baseline for efficiency-oriented transformers. We analyze the results based on the Pareto front -- the boundary of optimal models. Surprisingly, despite claims of other models being more efficient, ViT remains Pareto optimal across multiple metrics. We observe that hybrid attention-CNN models exhibit remarkable inference memory and parameter efficiency. Moreover, our benchmark shows that, in general, using a larger model is more efficient than using higher-resolution images. Thanks to our holistic evaluation, we provide a centralized resource for practitioners and researchers, facilitating informed decisions when selecting or developing efficient transformers.
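
The Pareto-front analysis mentioned in the abstract can be illustrated with a short, self-contained sketch. The model names and numbers below are hypothetical placeholders, not results from the paper; the code only shows how one might identify the non-dominated models given accuracy (higher is better), throughput (higher is better), and peak inference memory (lower is better):

```python
from typing import Dict, List, Tuple

# Hypothetical benchmark results (illustrative only, not taken from the paper):
# model -> (top-1 accuracy [%], throughput [images/s], peak inference memory [MB]).
RESULTS: Dict[str, Tuple[float, float, float]] = {
    "ViT-S":        (79.8, 1500.0,  900.0),
    "ViT-B":        (81.8,  850.0, 1700.0),
    "Hybrid-CNN-A": (80.9, 1100.0,  650.0),
    "Efficient-X":  (78.5, 1300.0, 1100.0),
}


def dominates(a: Tuple[float, float, float], b: Tuple[float, float, float]) -> bool:
    """True if model `a` is at least as good as `b` on every metric
    (accuracy and throughput up, memory down) and strictly better on one."""
    at_least_as_good = a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] > b[1] or a[2] < b[2]
    return at_least_as_good and strictly_better


def pareto_front(results: Dict[str, Tuple[float, float, float]]) -> List[str]:
    """Return the models that no other model dominates."""
    return [
        name
        for name, metrics in results.items()
        if not any(
            dominates(other, metrics)
            for other_name, other in results.items()
            if other_name != name
        )
    ]


if __name__ == "__main__":
    print("Pareto-optimal models:", pareto_front(RESULTS))
```

With these placeholder numbers, "Efficient-X" is dominated (another model is no worse on every metric), so the printed front contains only the remaining models; the paper applies the same non-domination criterion to its measured accuracy, speed, and memory results.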

View on arXiv
@article{nauen2025_2308.09372,
  title={Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers},
  author={Tobias Christian Nauen and Sebastian Palacio and Federico Raue and Andreas Dengel},
  journal={arXiv preprint arXiv:2308.09372},
  year={2025}
}