ResearchTrend.AI
Enhancing Efficiency in Vision Transformer Networks: Design Techniques and Insights

28 March 2024
Moein Heidari, Reza Azad, Sina Ghorbani Kolahi, René Arimond, Leon Niggemeier, Alaa Sulaiman, Afshin Bozorgpour, Ehsan Khodapanah Aghdam, A. Kazerouni, I. Hacihaliloglu, Dorit Merhof

Papers citing "Enhancing Efficiency in Vision Transformer Networks: Design Techniques and Insights"

10 / 10 papers shown
Echo-E$^3$Net: Efficient Endo-Epi Spatio-Temporal Network for Ejection Fraction Estimation
Moein Heidari, Afshin Bozorgpour, AmirHossein Zarif-Fakharnia, Dorit Merhof, I. Hacihaliloglu
21 Mar 2025
SL$^{2}$A-INR: Single-Layer Learnable Activation for Implicit Neural Representation
Moein Heidari, Reza Rezaeian, Reza Azad, Dorit Merhof, Hamid Soltanian-Zadeh, I. Hacihaliloglu
17 Sep 2024
Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers
Cong Wei, Brendan Duke, R. Jiang, P. Aarabi, Graham W. Taylor, Florian Shkurti
24 Mar 2023
BiFormer: Vision Transformer with Bi-Level Routing Attention
Lei Zhu, Xinjiang Wang, Zhanghan Ke, Wayne Zhang, Rynson W. H. Lau
15 Mar 2023
Spikformer: When Spiking Neural Network Meets Transformer
Zhaokun Zhou, Yuesheng Zhu, Chao He, Yaowei Wang, Shuicheng Yan, Yonghong Tian, Liuliang Yuan
29 Sep 2022
Hydra Attention: Efficient Attention with Many Heads
Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
15 Sep 2022
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari
05 Oct 2021
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
24 Feb 2021
Transformers in Vision: A Survey
Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, F. Khan, M. Shah
04 Jan 2021
Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015