Cross-Layer Distillation with Semantic Calibration
AAAI Conference on Artificial Intelligence (AAAI), 2021
arXiv:2012.03236, v2 (latest) · 6 December 2020
Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, Chun-Yen Chen
Community: FedML
Links: arXiv (abs) · PDF · HTML · GitHub (75★)
Papers citing "Cross-Layer Distillation with Semantic Calibration" (50 of 131 shown; page 1 of 3)
- Logit-Based Losses Limit the Effectiveness of Feature Knowledge Distillation. Nicholas Cooper, Lijun Chen, Sailesh Dwivedy, Danna Gurari. 18 Nov 2025. (173 · 0 · 0)
- Do Students Debias Like Teachers? On the Distillability of Bias Mitigation Methods. Jiali Cheng, Chirag Agarwal, Hadi Amiri. 30 Oct 2025. (155 · 1 · 0)
- CasPoinTr: Point Cloud Completion with Cascaded Networks and Knowledge Distillation [3DPC]. Yifan Yang, Yuxiang Yan, Boda Liu, Jian Pu. 27 Sep 2025. (143 · 0 · 0)
- Cross-Architecture Distillation Made Simple with Redundancy Suppression. Weijia Zhang, Yuehao Liu, Wu Ran, Chao Ma. 29 Jul 2025. (230 · 3 · 0)
- Scaling and Distilling Transformer Models for sEMG [MedIm]. Nicholas Mehlman, Jean-Christophe Gagnon-Audet, Michael Shvartsman, Kelvin Niu, Alexander H. Miller, Shagun Sodhani. 29 Jul 2025. (222 · 1 · 0)
- A Layered Self-Supervised Knowledge Distillation Framework for Efficient Multimodal Learning on the Edge. Tarique Dahri, Zulfiqar Ali Memon, Zhenyu Yu, Mohd Yamani Idna Idris, Sheheryar Khan, Sadiq Ahmad, Maged Shoman, Saddam Aziz, Rizwan Qureshi. 08 Jun 2025. (260 · 0 · 0)
- SST: Self-training with Self-adaptive Thresholding for Semi-supervised Learning. Information Processing & Management (IPM), 2025. Shuai Zhao, Heyan Huang, Xinge Li, Xiaokang Chen, Rui Wang. 31 May 2025. (252 · 3 · 0)
- InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective [VLM]. Yuanhong Zhang, Muyao Yuan, Weizhan Zhang, Tieliang Gong, Wen Wen, Jiangyong Ying, Weijie Shi. 28 May 2025. (266 · 0 · 0)
- Model Stitching by Functional Latent Alignment. Ioannis Athanasiadis, Anmar Karmush, Michael Felsberg. 26 May 2025. (303 · 1 · 0)
- JointDistill: Adaptive Multi-Task Distillation for Joint Depth Estimation and Scene Segmentation [VLM]. Tiancong Cheng, Ying Zhang, Yuxuan Liang, Roger Zimmermann, Zhiwen Yu, Bin Guo. 15 May 2025. (228 · 0 · 0)
- DNAD: Differentiable Neural Architecture Distillation. Xuan Rao, Bo Zhao, Derong Liu. 25 Apr 2025. (463 · 2 · 0)
- Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation. AAAI Conference on Artificial Intelligence (AAAI), 2025. Yuanmin Huang, Kai Hu, Yuhui Zhang, Z. Chen, Xieping Gao. 10 Apr 2025. (337 · 4 · 0)
- VRM: Knowledge Distillation via Virtual Relation Matching. W. Zhang, Fei Xie, Weidong Cai, Chao Ma. 28 Feb 2025. (638 · 3 · 0)
- Dynamic Frequency-Adaptive Knowledge Distillation for Speech Enhancement. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025. Xihao Yuan, Siqi Liu, Hanting Chen, Lu Zhou, Jian Li, Jie Hu. 07 Feb 2025. (191 · 5 · 0)
- Soft Knowledge Distillation with Multi-Dimensional Cross-Net Attention for Image Restoration Models Compression. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025. Yongheng Zhang, Danfeng Yan. 17 Jan 2025. (184 · 2 · 0)
- Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective. Jinjing Zhu, Songze Li, Lin Wang. 13 Jan 2025. (377 · 0 · 0)
- Cross-View Consistency Regularisation for Knowledge Distillation. ACM Multimedia (MM), 2024. W. Zhang, Dongnan Liu, Weidong Cai, Chao Ma. 21 Dec 2024. (525 · 11 · 0)
- Hybrid Data-Free Knowledge Distillation [DD]. AAAI Conference on Artificial Intelligence (AAAI), 2024. Jialiang Tang, Shuo Chen, Chen Gong. 18 Dec 2024. (254 · 1 · 0)
- Neural Collapse Inspired Knowledge Distillation. AAAI Conference on Artificial Intelligence (AAAI), 2024. Shuoxi Zhang, Zijian Song, Kun He. 16 Dec 2024. (515 · 1 · 0)
- SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models. North American Chapter of the Association for Computational Linguistics (NAACL), 2024. Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung. 25 Oct 2024. (461 · 2 · 0)
- Breaking Modality Gap in RGBT Tracking: Coupled Knowledge Distillation. ACM Multimedia (MM), 2024. Andong Lu, Jiacong Zhao, Chenglong Li, Yun Xiao, Bin Luo. 15 Oct 2024. (349 · 20 · 0)
- LOBG: Less Overfitting for Better Generalization in Vision-Language Model [VLM]. Chenhao Ding, Xinyuan Gao, Songlin Dong, Yuhang He, Qiang Wang, Alex C. Kot, Yihong Gong. 14 Oct 2024. (271 · 1 · 0)
- Distilling Invariant Representations with Dual Augmentation. Nikolaos Giakoumoglou, Tania Stathaki. 12 Oct 2024. (385 · 0 · 0)
- Conditional Image Synthesis with Diffusion Models: A Survey [VLM]. Zheyuan Zhan, Defang Chen, Jian-Ping Mei, Zhenghe Zhao, Jiawei Chen, Chun-Yen Chen, Siwei Lyu, Can Wang. 28 Sep 2024. (539 · 26 · 0)
- Applications of Knowledge Distillation in Remote Sensing: A Survey. Information Fusion (Inf. Fusion), 2024. Yassine Himeur, N. Aburaed, O. Elharrouss, Iraklis Varlamis, Shadi Atalla, Hussain Al Ahmad. 18 Sep 2024. (316 · 15 · 0)
- Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning. International Conference on Learning Representations (ICLR), 2024. Amin Karimi Monsefi, Mengxi Zhou, Nastaran Karimi Monsefi, Ser-Nam Lim, Wei-Lun Chao, R. Ramnath. 16 Sep 2024. (440 · 6 · 0)
- Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation. Kangkai Zhang, Shiming Ge, Ruixin Shi, Dan Zeng. 04 Sep 2024. (239 · 23 · 0)
- Low-Resolution Face Recognition via Adaptable Instance-Relation Distillation [CVBM]. IEEE International Joint Conference on Neural Network (IJCNN), 2024. Ruixin Shi, Weijia Guo, Shiming Ge. 03 Sep 2024. (315 · 1 · 0)
- Make a Strong Teacher with Label Assistance: A Novel Knowledge Distillation Approach for Semantic Segmentation [VLM]. Shoumeng Qiu, Jie Chen, Xinrun Li, Ru Wan, Xiangyang Xue, Jian Pu. 18 Jul 2024. (342 · 9 · 0)
- Relational Representation Distillation. Nikolaos Giakoumoglou, Tania Stathaki. 16 Jul 2024. (740 · 2 · 0)
- A Survey on Symbolic Knowledge Distillation of Large Language Models [SyDa]. Kamal Acharya, Alvaro Velasquez, Haoze Song. 12 Jul 2024. (345 · 31 · 0)
- 3M-Health: Multimodal Multi-Teacher Knowledge Distillation for Mental Health Detection. R. Cabral, Siwen Luo, Josiah Poon, S. Han. 12 Jul 2024. (273 · 4 · 0)
- Reprogramming Distillation for Medical Foundation Models. Yuhang Zhou, Siyuan Du, Haolin Li, Jiangchao Yao, Ya Zhang, Yanfeng Wang. 09 Jul 2024. (351 · 3 · 0)
- AMD: Automatic Multi-step Distillation of Large-scale Vision Models [VLM]. Cheng Han, Qifan Wang, S. Dianat, Majid Rabbani, Raghuveer M. Rao, Yi Fang, Qiang Guan, Lifu Huang, Dongfang Liu. 05 Jul 2024. (238 · 18 · 0)
- SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation [SSeg, SSL]. Wenhui Zhu, Xiwen Chen, Peijie Qiu, Mohammad Farazi, Aristeidis Sotiras, Abolfazl Razi, Yalin Wang. 21 Jun 2024. (327 · 38 · 0)
- Lightweight Model Pre-training via Language Guided Knowledge Distillation. Mingsheng Li, Lin Zhang, Mingzhen Zhu, Zilong Huang, Gang Yu, Jiayuan Fan, Tao Chen. 17 Jun 2024. (271 · 4 · 0)
- Adaptive Teaching with Shared Classifier for Knowledge Distillation. Jaeyeon Jang, Young-Ik Kim, Jisu Lim, Hyeonseong Lee. 12 Jun 2024. (282 · 0 · 0)
- DistilDoc: Knowledge Distillation for Visually-Rich Document Applications. Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas. 12 Jun 2024. (467 · 6 · 0)
- Lightweight Deep Learning for Resource-Constrained Environments: A Survey. Hou-I Liu, Marco Galindo, Hongxia Xie, Lai-Kuan Wong, Hong-Han Shuai, Yung-Hui Li, Wen-Huang Cheng. 08 Apr 2024. (425 · 201 · 0)
- On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models. Sean Farhat, Deming Chen. 04 Apr 2024. (316 · 0 · 0)
- Improve Knowledge Distillation via Label Revision and Data Selection. IEEE Transactions on Cognitive and Developmental Systems (IEEE TCDS), 2024. Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen. 03 Apr 2024. (327 · 7 · 0)
- Scale Decoupled Distillation. Shicai Wei. 20 Mar 2024. (368 · 21 · 0)
- TIE-KD: Teacher-Independent and Explainable Knowledge Distillation for Monocular Depth Estimation. Sangwon Choi, Daejune Choi, Duksu Kim. 22 Feb 2024. (189 · 9 · 0)
- Data Distribution Distilled Generative Model for Generalized Zero-Shot Recognition [SyDa]. Yijie Wang, Mingjian Hong, Luwen Huangfu, Shengyue Huang. 18 Feb 2024. (261 · 20 · 0)
- GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation. Ayan Banerjee, Sanket Biswas, Josep Lladós, Umapada Pal. 17 Feb 2024. (318 · 4 · 0)
- On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models [VLM, MQ]. Juliette Marrie, Michael Arbel, Julien Mairal, Diane Larlus. 17 Feb 2024. (344 · 2 · 0)
- FedD2S: Personalized Data-Free Federated Knowledge Distillation [FedML]. Kawa Atapour, S. J. Seyedmohammadi, J. Abouei, Arash Mohammadi, Konstantinos N. Plataniotis. 16 Feb 2024. (260 · 6 · 0)
- NutePrune: Efficient Progressive Pruning with Numerous Teachers for Large Language Models. Shengrui Li, Junzhe Chen, Xueting Han, Jing Bai. 15 Feb 2024. (313 · 8 · 0)
- Cooperative Knowledge Distillation: A Learner Agnostic Approach. Michael J. Livanos, Ian Davidson, Stephen Wong. 02 Feb 2024. (212 · 2 · 0)
- A Deep Hierarchical Feature Sparse Framework for Occluded Person Re-Identification. Yihu Song, Shuaishi Liu. 15 Jan 2024. (227 · 4 · 0)