arXiv:2003.03622
Explaining Knowledge Distillation by Quantifying the Knowledge
Computer Vision and Pattern Recognition (CVPR), 2020
7 March 2020
Xu Cheng, Zhefan Rao, Yilan Chen, Quanshi Zhang
Links: arXiv (abs) · PDF · HTML

Papers citing "Explaining Knowledge Distillation by Quantifying the Knowledge" (50 of 60 papers shown)

TopKD: Top-scaled Knowledge Distillation
Qi Wang, Jinjia Zhou
35 / 0 / 0 · 06 Aug 2025

A Transformer-in-Transformer Network Utilizing Knowledge Distillation for Image Recognition
Dewan Tauhid Rahman, Yeahia Sarker, Antar Mazumder, Md. Shamim Anower
ViT · 101 / 0 / 0 · 24 Feb 2025

Decoupling Dark Knowledge via Block-wise Logit Distillation for Feature-level Alignment
Chengting Yu, Fengzhao Zhang, Ruizhe Chen, Zuozhu Liu, Shurun Tan, Er-ping Li, Aili Wang
167 / 4 / 0 · 03 Nov 2024

Multi-Level Feature Distillation of Joint Teachers Trained on Distinct Image Datasets
Adrian Iordache, B. Alexe, Radu Tudor Ionescu
167 / 2 / 0 · 29 Oct 2024

InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation
Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan
178 / 0 / 0 · 25 Jun 2024

Exploring Dark Knowledge under Various Teacher Capacities and Addressing Capacity Mismatch
Wen-Shu Fan, Xin-Chun Li, Bowen Tao
130 / 2 / 0 · 21 May 2024

CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu
VLM · 220 / 2 / 0 · 22 Apr 2024

Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers
Diana-Nicoleta Grigore, Mariana-Iuliana Georgescu, J. A. Justo, T. Johansen, Andreea-Iuliana Ionescu, Radu Tudor Ionescu
137 / 1 / 0 · 14 Apr 2024

Enhancing Metaphor Detection through Soft Labels and Target Word Prediction
Kaidi Jia, Rongsheng Li
76 / 0 / 0 · 27 Mar 2024

Enabling Generalized Zero-shot Learning Towards Unseen Domains by Intrinsic Learning from Redundant LLM Semantics
Jiaqi Yue, Jiancheng Zhao, Chunhui Zhao, Biao Huang
110 / 0 / 0 · 21 Mar 2024

Two-Stage Multi-task Self-Supervised Learning for Medical Image Segmentation
Binyan Hu, A. K. Qin
SSL · 71 / 0 / 0 · 11 Feb 2024

Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF
Banghua Zhu, Michael I. Jordan, Jiantao Jiao
120 / 40 / 0 · 29 Jan 2024

Zone Evaluation: Revealing Spatial Bias in Object Detection
Zhaohui Zheng, Yuming Chen, Qibin Hou, Xiang Li, Ping Wang, Ming-Ming Cheng
ObjD · 148 / 6 / 0 · 20 Oct 2023

Towards the Fundamental Limits of Knowledge Transfer over Finite Domains
Qingyue Zhao, Banghua Zhu
176 / 4 / 0 · 11 Oct 2023

Computation-efficient Deep Learning for Computer Vision: A Survey
Yulin Wang, Yizeng Han, Chaofei Wang, Shiji Song, Qi Tian, Gao Huang
VLM · 186 / 26 / 0 · 27 Aug 2023

Dissecting RGB-D Learning for Improved Multi-modal Fusion
Hao Chen, Hao Zhou, Yunshu Zhang, Zheng Lin, Yongjian Deng
211 / 1 / 0 · 19 Aug 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
144 / 26 / 0 · 08 Aug 2023

Towards Better Query Classification with Multi-Expert Knowledge Condensation in JD Ads Search
Hai-Jian Ke, Ming Pang, Zheng Fang, Xue Jiang, Xi-Wei Zhao, Changping Peng, Zhangang Lin, Jinghe Hu, Jingping Shao
189 / 0 / 0 · 02 Aug 2023

Frameless Graph Knowledge Distillation
Dai Shi, Zhiqi Shao, Yi Guo, Junbin Gao
92 / 4 / 0 · 13 Jul 2023

Exploring the Lottery Ticket Hypothesis with Explainability Methods: Insights into Sparse Network Performance
Shantanu Ghosh, Kayhan Batmanghelich
103 / 1 / 0 · 07 Jul 2023

Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat
Shantanu Ghosh, K. Yu, Forough Arabshahi, Kayhan Batmanghelich
MoE · 128 / 14 / 0 · 07 Jul 2023

Leveraging Synthetic Targets for Machine Translation
Sarthak Mittal, Oleksii Hrinchuk, Oleksii Kuchaiev
86 / 2 / 0 · 07 May 2023

Improving Knowledge Distillation via Transferring Learning Ability
Long Liu, Tong Li, Hui Cheng
76 / 1 / 0 · 24 Apr 2023

Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval
Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He
VLM · 147 / 16 / 0 · 16 Mar 2023

Graph-based Knowledge Distillation: A survey and experimental evaluation
Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao
117 / 10 / 0 · 27 Feb 2023

Practical Knowledge Distillation: Using DNNs to Beat DNNs
Chungman Lee, Pavlos Anastasios Apostolopulos, Igor L. Markov
FedML · 106 / 1 / 0 · 23 Feb 2023

Understanding Self-Distillation in the Presence of Label Noise
Rudrajit Das, Sujay Sanghavi
159 / 19 / 0 · 30 Jan 2023

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
158 / 39 / 0 · 28 Oct 2022

On effects of Knowledge Distillation on Transfer Learning
Sushil Thapa
64 / 2 / 0 · 18 Oct 2022

Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
Xin-Chun Li, Wenxuan Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan
168 / 32 / 0 · 10 Oct 2022

Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification
Quanshi Zhang, Xu Cheng, Yilan Chen, Zhefan Rao
101 / 39 / 0 · 18 Aug 2022

Explaining Deepfake Detection by Analysing Image Matching
S. Dong, Jin Wang, Jiajun Liang, Haoqiang Fan, Renhe Ji
96 / 58 / 0 · 20 Jul 2022

Narrowing the Coordinate-frame Gap in Behavior Prediction Models: Distillation for Efficient and Accurate Scene-centric Motion Forecasting
DiJia Su, B. Douillard, Rami Al-Rfou, C. Park, Benjamin Sapp
115 / 10 / 0 · 08 Jun 2022

A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks
Bin Hu, Yu Sun, A. K. Qin
AI4CE · 101 / 0 / 0 · 29 May 2022

Heterogeneous Collaborative Learning for Personalized Healthcare Analytics via Messenger Distillation
Guanhua Ye, Tong Chen, Yawen Li, Li-zhen Cui, Quoc Viet Hung Nguyen, Hongzhi Yin
154 / 9 / 0 · 27 May 2022

Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation
Huarui He, Jie Wang, Zhanqiu Zhang, Feng Wu
106 / 46 / 0 · 24 May 2022

Towards Feature Distribution Alignment and Diversity Enhancement for Data-Free Quantization
Yangcheng Gao, Zhao Zhang, Richang Hong, Haijun Zhang, Jicong Fan, Shuicheng Yan
MQ · 101 / 10 / 0 · 30 Apr 2022

Decoupled Knowledge Distillation
Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, Jiajun Liang
194 / 628 / 0 · 16 Mar 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI · 287 / 480 / 0 · 20 Jan 2022

Class-Incremental Continual Learning into the eXtended DER-verse
Matteo Boschini, Lorenzo Bonicelli, Pietro Buzzega, Angelo Porrello, Simone Calderara
CLL, BDL · 141 / 154 / 0 · 03 Jan 2022

Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning
Shengjia Zhang, Tiancheng Lin, Yi Tian Xu
142 / 5 / 0 · 03 Dec 2021

Semi-supervised Domain Adaptation via Sample-to-Sample Self-Distillation
Jeongbeen Yoon, Dahyun Kang, Minsu Cho
TTA · 107 / 44 / 0 · 29 Nov 2021

Visualizing the Emergence of Intermediate Visual Patterns in DNNs
Mingjie Li, Shaobo Wang, Quanshi Zhang
139 / 11 / 0 · 05 Nov 2021

Instance-Conditional Knowledge Distillation for Object Detection
Zijian Kang, Peizhen Zhang, Xinming Zhang, Jian Sun, N. Zheng
157 / 80 / 0 · 25 Oct 2021

Visualizing the embedding space to explain the effect of knowledge distillation
Hyun Seung Lee, C. Wallraven
103 / 1 / 0 · 09 Oct 2021

Partial to Whole Knowledge Distillation: Progressive Distilling Decomposed Knowledge Boosts Student Better
Xuanyang Zhang, Xinming Zhang, Jian Sun
91 / 1 / 0 · 26 Sep 2021

Efficient Action Recognition Using Confidence Distillation
Shervin Manzuri Shalmani, Fei Chiang, Ronghuo Zheng
174 / 6 / 0 · 05 Sep 2021

Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation
Yufei Wang, Haoliang Li, Lap-pui Chau, Alex C. Kot
FedML · 90 / 47 / 0 · 06 Jul 2021

Dynamic Knowledge Distillation With Noise Elimination for RGB-D Salient Object Detection
Guangyu Ren, Yinxiao Yu, Hengyan Liu, Tania Stathaki
134 / 7 / 0 · 17 Jun 2021

Knowledge Distillation as Semiparametric Inference
Tri Dao, G. Kamath, Vasilis Syrgkanis, Lester W. Mackey
110 / 32 / 0 · 20 Apr 2021