GLiT: Neural Architecture Search for Global and Local Image Transformer
Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming-hui Sun, Junjie Yan, Wanli Ouyang
7 July 2021, arXiv:2107.02960 (ViT)

Papers citing "GLiT: Neural Architecture Search for Global and Local Image Transformer"

Showing 50 of 55 citing papers.

Vision Transformer Neural Architecture Search for Out-of-Distribution Generalization: Benchmark and Insights
Sy-Tuyen Ho, Tuan Van Vo, Somayeh Ebrahimkhani, Ngai-man Cheung
08 Jan 2025

Neural Architecture Search based Global-local Vision Mamba for Palm-Vein Recognition
Huafeng Qin, Yuming Fu, Jing Chen, M. El-Yacoubi, Xinbo Gao, Feng Xi
11 Aug 2024 (Mamba)

HyTAS: A Hyperspectral Image Transformer Architecture Search Benchmark and Analysis
Fangqin Zhou, Mert Kilickaya, Joaquin Vanschoren, Ran Piao
23 Jul 2024

Lite-SAM Is Actually What You Need for Segment Everything
Jianhai Fu, Yuanjie Yu, Ningchuan Li, Yi Zhang, Qichao Chen, Jianping Xiong, Jun Yin, Zhiyu Xiang
12 Jul 2024 (VLM)

Automatic Channel Pruning for Multi-Head Attention
Eunho Lee, Youngbae Hwang
31 May 2024 (ViT)

A Timely Survey on Vision Transformer for Deepfake Detection
Zhikan Wang, Zhongyao Cheng, Jiajie Xiong, Xun Xu, Tianrui Li, B. Veeravalli, Xulei Yang
14 May 2024

Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression
Hancheng Ye, Chong Yu, Peng Ye, Renqiu Xia, Yansong Tang, Jiwen Lu, Tao Chen, Bo-Wen Zhang
23 Mar 2024

General surgery vision transformer: A video pre-trained foundation model for general surgery
Samuel Schmidgall, Ji Woong Kim, Jeffery Jopling, Axel Krieger
09 Mar 2024 (ViT, MedIm)

DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions
Guangrun Wang, Changlin Li, Liuchun Yuan, Jiefeng Peng, Xiaoyu Xian, Xiaodan Liang, Xiaojun Chang, Liang Lin
02 Mar 2024

A Survey on Transformer Compression
Yehui Tang, Yunhe Wang, Jianyuan Guo, Zhijun Tu, Kai Han, Hailin Hu, Dacheng Tao
05 Feb 2024

FLORA: Fine-grained Low-Rank Architecture Search for Vision Transformer
Chi-Chih Chang, Yuan-Yao Sung, Shixing Yu, N. Huang, Diana Marculescu, Kai-Chiang Wu
07 Nov 2023 (ViT)

MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory
Yinan Liang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, Jiwen Lu
25 Oct 2023

Entropic Score metric: Decoupling Topology and Size in Training-free NAS
Niccolò Cavagnero, Luc Robbiano, Francesca Pistilli, Barbara Caputo, Giuseppe Averta
06 Oct 2023

Evolutionary Neural Architecture Search for Transformer in Knowledge Tracing
Shangshang Yang, Xiaoshan Yu, Ye Tian, Xueming Yan, Haiping Ma, Xingyi Zhang
02 Oct 2023 (ViT, KELM, AI4Ed)

RingMo-lite: A Remote Sensing Multi-task Lightweight Network with CNN-Transformer Hybrid Framework
Yuelei Wang, Ting Zhang, Liangjin Zhao, Lin Hu, Zhechao Wang, ..., Kaiqiang Chen, Xuan Zeng, Zhirui Wang, Hongqi Wang, Xian Sun
16 Sep 2023

DAT++: Spatially Dynamic Vision Transformer with Deformable Attention
Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, Gao Huang
04 Sep 2023 (ViT)

A Survey of Techniques for Optimizing Transformer Inference
Krishna Teja Chitty-Venkata, Sparsh Mittal, M. Emani, V. Vishwanath, Arun Somani
16 Jul 2023

Auto-Spikformer: Spikformer Architecture Search
Kaiwei Che, Zhaokun Zhou, Zhengyu Ma, Wei Fang, Yanqing Chen, Shuaijie Shen, Liuliang Yuan, Yonghong Tian
01 Jun 2023

EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention
Xinyu Liu, Houwen Peng, Ningxin Zheng, Yuqing Yang, Han Hu, Yixuan Yuan
11 May 2023 (ViT)

GPT-NAS: Evolutionary Neural Architecture Search with the Generative Pre-Trained Model
Caiyang Yu, Xianggen Liu, Wentao Feng, Yun Liu, Wentao Feng, Deng Xiong, Chenwei Tang, Jiancheng Lv
09 May 2023

CrossFormer++: A Versatile Vision Transformer Hinging on Cross-scale Attention
Wenxiao Wang, Wei Chen, Qibo Qiu, Long Chen, Boxi Wu, Binbin Lin, Xiaofei He, Wei Liu
13 Mar 2023

Full Stack Optimization of Transformer Inference: a Survey
Sehoon Kim, Coleman Hooper, Thanakul Wattanawong, Minwoo Kang, Ruohan Yan, ..., Qijing Huang, Kurt Keutzer, Michael W. Mahoney, Y. Shao, A. Gholami
27 Feb 2023 (MQ)

TFormer: A Transmission-Friendly ViT Model for IoT Devices
Zhichao Lu, Chuntao Ding, Felix Juefei Xu, Vishnu Naresh Boddeti, Shangguang Wang, Yun Yang
15 Feb 2023

UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Ying Jin, Zhendong Yang, Chun Yuan, Jiaqi Wang
31 Jan 2023 (VLM, ViT)

Neural Architecture Search: Insights from 1000 Papers
Colin White, Mahmoud Safari, R. Sukthanker, Binxin Ru, T. Elsken, Arber Zela, Debadeepta Dey, Frank Hutter
20 Jan 2023 (3DV, AI4CE)

Adaptively Clustering Neighbor Elements for Image-Text Generation
Zihua Wang, Xu Yang, Hanwang Zhang, Haiyang Xu, Mingshi Yan, Feisi Huang, Yu Zhang
05 Jan 2023 (VLM)

HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers
Peiyan Dong, Mengshu Sun, Alec Lu, Yanyue Xie, Li-Yu Daisy Liu, ..., Xin Meng, ZeLin Li, Xue Lin, Zhenman Fang, Yanzhi Wang
15 Nov 2022 (ViT)

RadFormer: Transformers with Global-Local Attention for Interpretable and Accurate Gallbladder Cancer Detection
Soumen Basu, Mayank Gupta, Pratyaksha Rana, Pankaj Gupta, Chetan Arora
09 Nov 2022 (ViT, MedIm)

In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation
Bolin Lai, Miao Liu, Fiona Ryan, James M. Rehg
08 Aug 2022 (ViT)

Neural Architecture Search on Efficient Transformers and Beyond
Zexiang Liu, Dong Li, Kaiyue Lu, Zhen Qin, Weixuan Sun, Jiacheng Xu, Yiran Zhong
28 Jul 2022

TinyViT: Fast Pretraining Distillation for Small Vision Transformers
Kan Wu, Jinnian Zhang, Houwen Peng, Mengchen Liu, Bin Xiao, Jianlong Fu, Lu Yuan
21 Jul 2022 (ViT)

Earthformer: Exploring Space-Time Transformers for Earth System Forecasting
Zhihan Gao, Xingjian Shi, Hao Wang, Yi Zhu, Yuyang Wang, Mu Li, Dit-Yan Yeung
12 Jul 2022 (AI4TS)

EATFormer: Improving Vision Transformer Inspired by Evolutionary Algorithm
Jiangning Zhang, Xiangtai Li, Yabiao Wang, Chengjie Wang, Yibo Yang, Yong Liu, Dacheng Tao
19 Jun 2022 (ViT)

Fast Vision Transformers with HiLo Attention
Zizheng Pan, Jianfei Cai, Bohan Zhuang
26 May 2022

Inception Transformer
Chenyang Si, Weihao Yu, Pan Zhou, Yichen Zhou, Xinchao Wang, Shuicheng Yan
25 May 2022 (ViT)

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks
Haoran You, Baopu Li, Huihong Shi, Y. Fu, Yingyan Lin
17 May 2022

SepViT: Separable Vision Transformer
Wei Li, Xing Wang, Xin Xia, Jie Wu, Jiashi Li, Xuefeng Xiao, Min Zheng, Shiping Wen
29 Mar 2022 (ViT)

Training-free Transformer Architecture Search
Qinqin Zhou, Kekai Sheng, Xiawu Zheng, Ke Li, Xing Sun, Yonghong Tian, Jie Chen, Rongrong Ji
23 Mar 2022 (ViT)

Backbone is All Your Need: A Simplified Architecture for Visual Object Tracking
Boyu Chen, Peixia Li, Lei Bai, Leixian Qiao, Qiuhong Shen, Bo-wen Li, Weihao Gan, Wei Wu, Wanli Ouyang
10 Mar 2022 (ViT, VOT)

Auto-scaling Vision Transformers without Training
Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wang, Denny Zhou
24 Feb 2022 (ViT)

Mixing and Shifting: Exploiting Global and Local Dependencies in Vision MLPs
Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou
14 Feb 2022

Neural Architecture Search for Spiking Neural Networks
Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda
23 Jan 2022

Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space
Arnav Chavan, Zhiqiang Shen, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric P. Xing
03 Jan 2022 (ViT)

Vision Transformer with Deformable Attention
Zhuofan Xia, Xuran Pan, S. Song, Li Erran Li, Gao Huang
03 Jan 2022 (ViT)

An Image Patch is a Wave: Phase-Aware Vision MLP
Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Yanxi Li, Chao Xu, Yunhe Wang
24 Nov 2021

Pruning Self-attentions into Convolutional Layers in Single Path
Haoyu He, Jianfei Cai, Jing Liu, Zizheng Pan, Jing Zhang, Dacheng Tao, Bohan Zhuang
23 Nov 2021 (ViT)

Mesa: A Memory-saving Training Framework for Transformers
Zizheng Pan, Peng Chen, Haoyu He, Jing Liu, Jianfei Cai, Bohan Zhuang
22 Nov 2021

Global Vision Transformer Pruning with Hessian-Aware Saliency
Huanrui Yang, Hongxu Yin, Maying Shen, Pavlo Molchanov, Hai Helen Li, Jan Kautz
10 Oct 2021 (ViT)

BN-NAS: Neural Architecture Search with Batch Normalization
Boyu Chen, Peixia Li, Baopu Li, Chen Lin, Chuming Li, Ming-hui Sun, Junjie Yan, Wanli Ouyang
16 Aug 2021

Less is More: Pay Less Attention in Vision Transformers
Zizheng Pan, Bohan Zhuang, Haoyu He, Jing Liu, Jianfei Cai
29 May 2021 (ViT)