Distilling Efficient Vision Transformers from CNNs for Semantic Segmentation
arXiv:2310.07265 · 11 October 2023
Xueye Zheng, Yunhao Luo, Pengyuan Zhou, Lin Wang
Papers citing "Distilling Efficient Vision Transformers from CNNs for Semantic Segmentation" (8 papers shown):
- Segment Any RGB-Thermal Model with Language-aided Distillation [VLM]
  Dong Xing, Xianxun Zhu, Wei Zhou, Qika Lin, Hang Yang, Yuqing Wang (04 May 2025)
- PreFallKD: Pre-Impact Fall Detection via CNN-ViT Knowledge Distillation
  Tin-Han Chi, Kai-Chun Liu, Chia-Yeh Hsieh, Yu Tsao, Chia-Tai Chan (07 Mar 2023)
- ViTKD: Practical Guidelines for ViT Feature Knowledge Distillation
  Zhendong Yang, Zhe Li, Ailing Zeng, Zexian Li, Chun Yuan, Yu Li (06 Sep 2022)
- Transformer-CNN Cohort: Semi-supervised Semantic Segmentation by the Best of Both Students [ViT]
  Xueye Zheng, Yuan Luo, Hao Wang, Chong Fu, Lin Wang (06 Sep 2022)
- Are Transformers More Robust Than CNNs? [ViT, AAML]
  Yutong Bai, Jieru Mei, Alan Yuille, Cihang Xie (10 Nov 2021)
- Intriguing Properties of Vision Transformers [ViT]
  Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, F. Khan, Ming-Hsuan Yang (21 May 2021)
- Visformer: The Vision-friendly Transformer [ViT]
  Zhengsu Chen, Lingxi Xie, Jianwei Niu, Xuefeng Liu, Longhui Wei, Qi Tian (26 Apr 2021)
- Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions [ViT]
  Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao (24 Feb 2021)