EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention
11 May 2023
Xinyu Liu, Houwen Peng, Ningxin Zheng, Yuqing Yang, Han Hu, Yixuan Yuan
ViT

Papers citing "EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention"

25 / 25 papers shown
ORXE: Orchestrating Experts for Dynamically Configurable Efficiency
Qingyuan Wang, Guoxin Wang, B. Cardiff, Deepu John
28 · 0 · 0 · 07 May 2025

Image Recognition with Online Lightweight Vision Transformer: A Survey
Zherui Zhang, Rongtao Xu, Jie Zhou, Changwei Wang, Xingtian Pei, ..., Jiguang Zhang, Li Guo, Longxiang Gao, W. Xu, Shibiao Xu
ViT
48 · 0 · 0 · 06 May 2025

ASHiTA: Automatic Scene-grounded HIerarchical Task Analysis
Yun Chang, Leonor Fermoselle, Duy Ta, Bernadette Bucher, Luca Carlone, Jiuguang Wang
30 · 0 · 0 · 09 Apr 2025

Enhancing Layer Attention Efficiency through Pruning Redundant Retrievals
Hanze Li, Xiande Huang
39 · 0 · 0 · 09 Mar 2025

Towards High-performance Spiking Transformers from ANN to SNN Conversion
Zihan Huang, Xinyu Shi, Zecheng Hao, Tong Bu, Jianhao Ding, Zhaofei Yu, Tiejun Huang
28 · 7 · 0 · 28 Feb 2025

Prion-ViT: Prions-Inspired Vision Transformers for Temperature prediction with Specklegrams
Abhishek Sebastian, Pragna R, Sonaa Rajagopal, Muralikrishnan Mani
53 · 0 · 0 · 28 Jan 2025

EfficientViM: Efficient Vision Mamba with Hidden State Mixer based State Space Duality
Sanghyeok Lee, Joonmyung Choi, Hyunwoo J. Kim
107 · 3 · 0 · 22 Nov 2024

Designing Concise ConvNets with Columnar Stages
Ashish Kumar, Jaesik Park
MQ
20 · 0 · 0 · 05 Oct 2024

MacFormer: Semantic Segmentation with Fine Object Boundaries
Guoan Xu, Wenfeng Huang, Tao Wu, Ligeng Chen, Wenjing Jia, Guangwei Gao, Xiatian Zhu, Stuart W. Perry
24 · 0 · 0 · 11 Aug 2024

LightStereo: Channel Boost Is All You Need for Efficient 2D Cost Aggregation
Xianda Guo, Chenming Zhang, Dujun Nie, Wenzhao Zheng, Youmin Zhang, Long Chen
18 · 7 · 0 · 28 Jun 2024

Pick-or-Mix: Dynamic Channel Sampling for ConvNets
Ashish Kumar, Daneul Kim, Jaesik Park, Laxmidhar Behera
27 · 1 · 0 · 16 Jun 2024

WonderWorld: Interactive 3D Scene Generation from a Single Image
Hong-Xing Yu, Haoyi Duan, Charles Herrmann, William T. Freeman, Jiajun Wu
3DGS, VGen
41 · 37 · 0 · 13 Jun 2024

Enhancing Efficiency in Vision Transformer Networks: Design Techniques and Insights
Moein Heidari, Reza Azad, Sina Ghorbani Kolahi, René Arimond, Leon Niggemeier, ..., Afshin Bozorgpour, Ehsan Khodapanah Aghdam, A. Kazerouni, I. Hacihaliloglu, Dorit Merhof
36 · 7 · 0 · 28 Mar 2024

Tiny Models are the Computational Saver for Large Models
Qingyuan Wang, B. Cardiff, Antoine Frappé, Benoît Larras, Deepu John
22 · 2 · 0 · 26 Mar 2024

FViT: A Focal Vision Transformer with Gabor Filter
Yulong Shi, Mingwei Sun, Yongshuai Wang, Rui Wang
47 · 4 · 0 · 17 Feb 2024

HEViTPose: High-Efficiency Vision Transformer for Human Pose Estimation
Chengpeng Wu, Guangxing Tan, Chunyu Li
ViT
19 · 0 · 0 · 22 Nov 2023

EViT: An Eagle Vision Transformer with Bi-Fovea Self-Attention
Yulong Shi, Mingwei Sun, Yongshuai Wang, Hui Sun, Zengqiang Chen
29 · 3 · 0 · 10 Oct 2023

TurboViT: Generating Fast Vision Transformers via Generative Architecture Search
Alexander Wong, Saad Abbasi, Saeejith Nair
ViT
13 · 1 · 0 · 22 Aug 2023

Transformer Quality in Linear Time
Weizhe Hua, Zihang Dai, Hanxiao Liu, Quoc V. Le
71 · 220 · 0 · 21 Feb 2022

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari
ViT
189 · 1,148 · 0 · 05 Oct 2021

Mobile-Former: Bridging MobileNet and Transformer
Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Xiaoyi Dong, Lu Yuan, Zicheng Liu
ViT
172 · 462 · 0 · 12 Aug 2021

Combiner: Full Attention Transformer with Sparse Computation Cost
Hongyu Ren, H. Dai, Zihang Dai, Mengjiao Yang, J. Leskovec, Dale Schuurmans, Bo Dai
73 · 77 · 0 · 12 Jul 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT
263 · 3,538 · 0 · 24 Feb 2021

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH
948 · 20,214 · 0 · 17 Apr 2017

Xception: Deep Learning with Depthwise Separable Convolutions
François Chollet
MDE, BDL, PINN
201 · 14,190 · 0 · 07 Oct 2016