Knowledge Distillation via the Target-aware Transformer
arXiv 2205.10793 · 22 May 2022
Sihao Lin, Hongwei Xie, Bing Wang, Kaicheng Yu, Xiaojun Chang, Xiaodan Liang, G. Wang
Tags: ViT
Papers citing "Knowledge Distillation via the Target-aware Transformer" (50 of 59 shown)

Title | Authors | Tags | Date
Mix-QSAM: Mixed-Precision Quantization of the Segment Anything Model | Navin Ranjan, Andreas E. Savakis | MQ, VLM | 08 May 2025
Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression | Dohyun Kim, S. Park, Geonhee Han, Seung Wook Kim, Paul Hongsuck Seo | DiffM | 02 Apr 2025
Delving Deep into Semantic Relation Distillation | Zhaoyi Yan, Kangjun Liu, Qixiang Ye | - | 27 Mar 2025
VRM: Knowledge Distillation via Virtual Relation Matching | W. Zhang, Fei Xie, Weidong Cai, Chao Ma | - | 28 Feb 2025
A Transformer-in-Transformer Network Utilizing Knowledge Distillation for Image Recognition | Dewan Tauhid Rahman, Yeahia Sarker, Antar Mazumder, Md. Shamim Anower | ViT | 24 Feb 2025
Multi-Level Decoupled Relational Distillation for Heterogeneous Architectures | Yaoxin Yang, Peng Ye, Weihao Lin, Kangcong Li, Yan Wen, Jia Hao, Tao Chen | - | 10 Feb 2025
Mix-QViT: Mixed-Precision Vision Transformer Quantization Driven by Layer Importance and Quantization Sensitivity | Navin Ranjan, Andreas E. Savakis | MQ | 10 Jan 2025
CLFace: A Scalable and Resource-Efficient Continual Learning Framework for Lifelong Face Recognition | Md Golam Moula Mehedi Hasan, S. Sami, Nasser M. Nasrabadi | CLL | 21 Nov 2024
Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies | Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal | - | 30 Sep 2024
Harmonizing knowledge Transfer in Neural Network with Unified Distillation | Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang | - | 27 Sep 2024
Computer Vision Model Compression Techniques for Embedded Systems: A Survey | Alexandre Lopes, Fernando Pereira dos Santos, D. Oliveira, Mauricio Schiezaro, Hélio Pedrini | - | 15 Aug 2024
An approach to optimize inference of the DIART speaker diarization pipeline | Roman Aperdannier, Sigurd Schacht, Alexander Piazza | - | 05 Aug 2024
An Attention-based Representation Distillation Baseline for Multi-Label Continual Learning | Martin Menabue, Emanuele Frascaroli, Matteo Boschini, Lorenzo Bonicelli, Angelo Porrello, Simone Calderara | CLL | 19 Jul 2024
InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation | Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan | - | 25 Jun 2024
Estimating Human Poses Across Datasets: A Unified Skeleton and Multi-Teacher Distillation Approach | Muhammad Gul Zain Ali Khan, Dhavalkumar Limbachiya, Didier Stricker, Muhammad Zeshan Afzal | 3DH | 30 May 2024
Aligning in a Compact Space: Contrastive Knowledge Distillation between Heterogeneous Architectures | Hongjun Wu, Li Xiao, Xingkuo Zhang, Yining Miao | - | 28 May 2024
LoReTrack: Efficient and Accurate Low-Resolution Transformer Tracking | Shaohua Dong, Yunhe Feng, Qing Yang, Yuewei Lin, Heng Fan | - | 27 May 2024
Deep video representation learning: a survey | Elham Ravanbakhsh, Yongqing Liang, J. Ramanujam, Xin Li | - | 10 May 2024
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks | Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Min-man Wu, Xiaoli Li | - | 09 May 2024
CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective | Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu | VLM | 22 Apr 2024
MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution | Yuxuan Jiang, Chen Feng, Fan Zhang, David Bull | SupR | 15 Apr 2024
Lightweight Deep Learning for Resource-Constrained Environments: A Survey | Hou-I Liu, Marco Galindo, Hongxia Xie, Lai-Kuan Wong, Hong-Han Shuai, Yung-Hui Li, Wen-Huang Cheng | - | 08 Apr 2024
Ranking Distillation for Open-Ended Video Question Answering with Insufficient Labels | Tianming Liang, Chaolei Tan, Beihao Xia, Wei-Shi Zheng, Jianfang Hu | - | 21 Mar 2024
Logit Standardization in Knowledge Distillation | Shangquan Sun, Wenqi Ren, Jingzhi Li, Rui Wang, Xiaochun Cao | - | 03 Mar 2024
LRP-QViT: Mixed-Precision Vision Transformer Quantization via Layer-wise Relevance Propagation | Navin Ranjan, Andreas E. Savakis | MQ | 20 Jan 2024
Video Recognition in Portrait Mode | Mingfei Han, Linjie Yang, Xiaojie Jin, Jiashi Feng, Xiaojun Chang, Heng Wang | - | 21 Dec 2023
Weight subcloning: direct initialization of transformers using larger pretrained ones | Mohammad Samragh, Mehrdad Farajtabar, Sachin Mehta, Raviteja Vemulapalli, Fartash Faghri, Devang Naik, Oncel Tuzel, Mohammad Rastegari | - | 14 Dec 2023
RdimKD: Generic Distillation Paradigm by Dimensionality Reduction | Yi Guo, Yiqian He, Xiaoyang Li, Haotong Qin, Van Tung Pham, Yang Zhang, Shouda Liu | - | 14 Dec 2023
Generative Model-based Feature Knowledge Distillation for Action Recognition | Guiqin Wang, Peng Zhao, Yanjiang Shi, Cong Zhao, Shusen Yang | VLM | 14 Dec 2023
Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction | Mohamed El Amine Elforaici, E. Montagnon, Francisco Perdigon Romero, W. Le, F. Azzi, Dominique Trudel, Bich Nguyen, Simon Turcotte, An Tang, Samuel Kadoury | MedIm | 17 Nov 2023
DEED: Dynamic Early Exit on Decoder for Accelerating Encoder-Decoder Transformer Models | Peng Tang, Pengkai Zhu, Tian Li, Srikar Appalaraju, Vijay Mahadevan, R. Manmatha | - | 15 Nov 2023
Mask Propagation for Efficient Video Semantic Segmentation | Yuetian Weng, Mingfei Han, Haoyu He, Mingjie Li, Lina Yao, Xiaojun Chang, Bohan Zhuang | - | 29 Oct 2023
DomainAdaptor: A Novel Approach to Test-time Adaptation | Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao | OOD, TTA | 20 Aug 2023
Influence Function Based Second-Order Channel Pruning-Evaluating True Loss Changes For Pruning Is Possible Without Retraining | Hongrong Cheng, Miao Zhang, Javen Qinfeng Shi | AAML | 13 Aug 2023
NormKD: Normalized Logits for Knowledge Distillation | Zhihao Chi, Tu Zheng, Hengjia Li, Zheng Yang, Boxi Wu, Binbin Lin, D. Cai | - | 01 Aug 2023
Review of Large Vision Models and Visual Prompt Engineering | Jiaqi Wang, Zheng Liu, Lin Zhao, Zihao Wu, Chong Ma, ..., Bao Ge, Yixuan Yuan, Dinggang Shen, Tianming Liu, Shu Zhang | VLM, LRM | 03 Jul 2023
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation | Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu | VLM, OffRL | 19 Jun 2023
Revising deep learning methods in parking lot occupancy detection | A. Martynova, Mikhail K. Kuznetsov, Vadim Porvatov, Vladislav Tishin, Andrey Kuznetsov, Natalia Semenova, Ksenia G Kuznetsova | - | 07 Jun 2023
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale | Zhiwei Hao, Jianyuan Guo, Kai Han, Han Hu, Chang Xu, Yunhe Wang | - | 25 May 2023
BinaryViT: Towards Efficient and Accurate Binary Vision Transformers | Junrui Xiao, Zhikai Li, Lianwei Yang, Qingyi Gu | MQ, ViT | 24 May 2023
Soft Prompt Decoding for Multilingual Dense Retrieval | Zhiqi Huang, Hansi Zeng, Hamed Zamani, James Allan | RALM | 15 May 2023
Function-Consistent Feature Distillation | Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen | - | 24 Apr 2023
Transformer-based models and hardware acceleration analysis in autonomous driving: A survey | J. Zhong, Zheng Liu, Xiangshan Chen | ViT | 21 Apr 2023
Distilling Token-Pruned Pose Transformer for 2D Human Pose Estimation | Feixiang Ren | ViT | 12 Apr 2023
DisWOT: Student Architecture Search for Distillation WithOut Training | Peijie Dong, Lujun Li, Zimian Wei | - | 28 Mar 2023
A Simple and Generic Framework for Feature Distillation via Channel-wise Transformation | Ziwei Liu, Yongtao Wang, Xiaojie Chu | - | 23 Mar 2023
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels | Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, Yu Li | - | 23 Mar 2023
Channel-Aware Distillation Transformer for Depth Estimation on Nano Drones | Ning Zhang, F. Nex, G. Vosselman, N. Kerle | - | 18 Mar 2023
Knowledge Distillation in Vision Transformers: A Critical Review | Gousia Habib, Tausifa Jan Saleem, Brejesh Lall | - | 04 Feb 2023
Improving Cross-lingual Information Retrieval on Low-Resource Languages via Optimal Transport Distillation | Zhiqi Huang, Puxuan Yu, James Allan | VLM | 29 Jan 2023