Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression
arXiv 2403.15835 · 23 March 2024
Hancheng Ye, Chong Yu, Peng Ye, Renqiu Xia, Yansong Tang, Jiwen Lu, Tao Chen, Bo-Wen Zhang
Papers citing "Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression" (6 papers shown)
Efficient Architecture Search via Bi-level Data Pruning (21 Dec 2023)
Chongjun Tu, Peng Ye, Weihao Lin, Hancheng Ye, Chong Yu, Tao Chen, Baopu Li, Wanli Ouyang
24 / 2 / 0
DepGraph: Towards Any Structural Pruning (30 Jan 2023)
Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang
GNN · 79 / 245 / 0
Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training (28 May 2022)
Renrui Zhang, Ziyu Guo, Rongyao Fang, Bingyan Zhao, Dong Wang, Yu Qiao, Hongsheng Li, Peng Gao
3DPC · 167 / 241 / 0
Masked Autoencoders Are Scalable Vision Learners (11 Nov 2021)
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM · 258 / 7,337 / 0
Transformer in Transformer (27 Feb 2021)
Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
ViT · 282 / 1,490 / 0
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions (24 Feb 2021)
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT · 263 / 3,538 / 0