From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels
Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, Yu Li
arXiv:2303.13005, 23 March 2023
Papers citing "From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels" (42 of 42 papers shown):
- Replay-Based Continual Learning with Dual-Layered Distillation and a Streamlined U-Net for Efficient Text-to-Image Generation. Md. Naimur Asif Borno, Md Sakib Hossain Shovon, Asmaa Soliman Al-Moisheer, Mohammad Ali Moni. 11 May 2025.
- Image Recognition with Online Lightweight Vision Transformer: A Survey [ViT]. Zherui Zhang, Rongtao Xu, Jie Zhou, Changwei Wang, Xingtian Pei, ..., Jiguang Zhang, Li Guo, Longxiang Gao, W. Xu, Shibiao Xu. 06 May 2025.
- Sample-level Adaptive Knowledge Distillation for Action Recognition. Ping Li, Chenhao Ping, Wenxiao Wang, Mingli Song. 01 Apr 2025.
- VRM: Knowledge Distillation via Virtual Relation Matching. W. Zhang, Fei Xie, Weidong Cai, Chao Ma. 28 Feb 2025.
- Cross-View Consistency Regularisation for Knowledge Distillation. W. Zhang, Dongnan Liu, Weidong Cai, Chao Ma. 21 Dec 2024.
- Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation. Jiaming Lv, Haoyuan Yang, P. Li. 11 Dec 2024.
- Decoupling Dark Knowledge via Block-wise Logit Distillation for Feature-level Alignment. Chengting Yu, Fengzhao Zhang, Ruizhe Chen, Zuozhu Liu, Shurun Tan, Er-ping Li, Aili Wang. 03 Nov 2024.
- KA²ER: Knowledge Adaptive Amalgamation of ExpeRts for Medical Images Segmentation [MedIm]. Shangde Gao, Yichao Fu, Ke Liu, Hongxia Xu, Jian Wu. 28 Oct 2024.
- CAKD: A Correlation-Aware Knowledge Distillation Framework Based on Decoupling Kullback-Leibler Divergence. Zao Zhang, Huaming Chen, Pei Ning, Nan Yang, Dong Yuan. 17 Oct 2024.
- Efficient and Robust Knowledge Distillation from A Stronger Teacher Based on Correlation Matching. Wenqi Niu, Yingchao Wang, Guohui Cai, Hanpo Hou. 09 Oct 2024.
- Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher. Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen. 05 Oct 2024.
- Harmonizing knowledge Transfer in Neural Network with Unified Distillation. Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang. 27 Sep 2024.
- Applications of Knowledge Distillation in Remote Sensing: A Survey. Yassine Himeur, N. Aburaed, O. Elharrouss, Iraklis Varlamis, Shadi Atalla, W. Mansoor, Hussain Al Ahmad. 18 Sep 2024.
- Unleashing the Potential of Mamba: Boosting a LiDAR 3D Sparse Detector by Using Cross-Model Knowledge Distillation [Mamba]. Rui Yu, Runkai Zhao, Jiagen Li, Qingsong Zhao, Songhao Zhu, HuaiCheng Yan, Meng Wang. 17 Sep 2024.
- LoCa: Logit Calibration for Knowledge Distillation. Runming Yang, Taiqiang Wu, Yujiu Yang. 07 Sep 2024.
- Adaptive Explicit Knowledge Transfer for Knowledge Distillation. H. Park, Jong-seok Lee. 03 Sep 2024.
- LAKD-Activation Mapping Distillation Based on Local Learning. Yaoze Zhang, Yuming Zhang, Yu Zhao, Yue Zhang, Feiyu Zhu. 21 Aug 2024.
- Computer Vision Model Compression Techniques for Embedded Systems: A Survey. Alexandre Lopes, Fernando Pereira dos Santos, D. Oliveira, Mauricio Schiezaro, Hélio Pedrini. 15 Aug 2024.
- Optimizing Vision Transformers with Data-Free Knowledge Transfer. Gousia Habib, Damandeep Singh, I. Malik, Brejesh Lall. 12 Aug 2024.
- DDK: Distilling Domain Knowledge for Efficient Large Language Models. Jiaheng Liu, Chenchen Zhang, Jinyang Guo, Yuanxing Zhang, Haoran Que, ..., Congnan Liu, Wenbo Su, Jiamang Wang, Lin Qu, Bo Zheng. 23 Jul 2024.
- BiLD: Bi-directional Logits Difference Loss for Large Language Model Distillation. Minchong Li, Feng Zhou, Xiaohui Song. 19 Jun 2024.
- Communication-Efficient Federated Knowledge Graph Embedding with Entity-Wise Top-K Sparsification [FedML]. Xiaoxiong Zhang, Zhiwei Zeng, Xin Zhou, Dusit Niyato, Zhiqi Shen. 19 Jun 2024.
- Self-Distillation Learning Based on Temporal-Spatial Consistency for Spiking Neural Networks. Lin Zuo, Yongqi Ding, Mengmeng Jing, Kunshan Yang, Yunqian Yu. 12 Jun 2024.
- DistilDoc: Knowledge Distillation for Visually-Rich Document Applications. Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas. 12 Jun 2024.
- GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost. Xinyi Shang, Peng Sun, Tao Lin. 23 May 2024.
- ESP-Zero: Unsupervised enhancement of zero-shot classification for Extremely Sparse Point cloud [3DPC]. Jiayi Han, Zidi Cao, Weibo Zheng, Xiangguo Zhou, Xiangjian He, Yuanfang Zhang, Daisen Wei. 30 Apr 2024.
- Low-Rank Knowledge Decomposition for Medical Foundation Models. Yuhang Zhou, Haolin Li, Siyuan Du, Jiangchao Yao, Ya-Qin Zhang, Yanfeng Wang. 26 Apr 2024.
- CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective [VLM]. Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu. 22 Apr 2024.
- MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities [MoMe]. Kunxi Li, Tianyu Zhan, Kairui Fu, Shengyu Zhang, Kun Kuang, Jiwei Li, Zhou Zhao, Fei Wu. 20 Apr 2024.
- Learning to Project for Cross-Task Knowledge Distillation. Dylan Auty, Roy Miles, Benedikt Kolbeinsson, K. Mikolajczyk. 21 Mar 2024.
- Scale Decoupled Distillation. Shicai Wei. 20 Mar 2024.
- V_kD: Improving Knowledge Distillation using Orthogonal Projections. Roy Miles, Ismail Elezi, Jiankang Deng. 10 Mar 2024.
- Weakly Supervised Monocular 3D Detection with a Single-View Image. Xue-Qiu Jiang, Sheng Jin, Lewei Lu, Xiaoqin Zhang, Shijian Lu. 29 Feb 2024.
- GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation. Ayan Banerjee, Sanket Biswas, Josep Lladós, Umapada Pal. 17 Feb 2024.
- Knowledge Distillation Based on Transformed Teacher Matching. Kaixiang Zheng, En-Hui Yang. 17 Feb 2024.
- Rethinking Centered Kernel Alignment in Knowledge Distillation. Zikai Zhou, Yunhang Shen, Shitong Shao, Linrui Gong, Shaohui Lin. 22 Jan 2024.
- GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction [MoE]. Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Suncheng Xiang, Zefang Yu, Ting Liu, Yuzhuo Fu. 12 Dec 2023.
- Effective Whole-body Pose Estimation with Two-stages Distillation. Zhendong Yang, Ailing Zeng, Chun Yuan, Yu Li. 29 Jul 2023.
- Understanding the Role of the Projector in Knowledge Distillation. Roy Miles, K. Mikolajczyk. 20 Mar 2023.
- ViTKD: Practical Guidelines for ViT feature knowledge distillation. Zhendong Yang, Zhe Li, Ailing Zeng, Zexian Li, Chun Yuan, Yu Li. 06 Sep 2022.
- Distilling Knowledge via Knowledge Review. Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia. 19 Apr 2021.
- MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications [3DH]. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam. 17 Apr 2017.