On the Efficacy of Knowledge Distillation

3 October 2019
Jang Hyun Cho, Bharath Hariharan
ArXiv · PDF · HTML
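Every citing paper below builds on the knowledge-distillation setup this paper studies, so a brief sketch may help orient the list. What follows is a minimal PyTorch rendition of the standard distillation objective (softened cross-entropy in the style of Hinton et al., 2015), not code from this paper; the temperature T and mixing weight alpha are illustrative assumptions.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Hard-label term: ordinary cross-entropy against ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft term: KL divergence between the temperature-softened teacher
    # and student distributions; the T*T factor keeps gradient magnitudes
    # comparable across temperatures (Hinton et al., 2015).
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # alpha blends the soft (teacher) signal with the hard (label) signal.
    return alpha * kl + (1.0 - alpha) * ce

# Toy usage: random logits for a batch of 8 examples over 10 classes.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(kd_loss(student, teacher, labels))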

Papers citing "On the Efficacy of Knowledge Distillation"

Showing 50 of 113 citing papers.

Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective
Jiangmeng Li, Yanan Zhang, Wenwen Qiang, Hui Xiong, Chengbo Jiao, Xiaohui Hu, Changwen Zheng, Gang Hua
CML · 26 Aug 2022

Masked Autoencoders Enable Efficient Knowledge Distillers
Yutong Bai, Zeyu Wang, Junfei Xiao, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie
CLL · 25 Aug 2022

Effectiveness of Function Matching in Driving Scene Recognition
Shingo Yashima
20 Aug 2022

Overlooked Poses Actually Make Sense: Distilling Privileged Knowledge for Human Motion Prediction
Xiaoning Sun, Qiongjie Cui, Huaijiang Sun, Bin Li, Weiqing Li, Jianfeng Lu
02 Aug 2022

Teachers in concordance for pseudo-labeling of 3D sequential data
Awet Haileslassie Gebrehiwot, Patrik Vacek, David Hurych, Karel Zimmermann, P. Pérez, Tomáš Svoboda
3DPC · 13 Jul 2022

ACT-Net: Asymmetric Co-Teacher Network for Semi-supervised Memory-efficient Medical Image Segmentation
Ziyuan Zhao, An Zhu, Zeng Zeng, B. Veeravalli, Cuntai Guan
05 Jul 2022

Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity
Jianyi Yang, Shaolei Ren
02 Jul 2022

Crowd Localization from Gaussian Mixture Scoped Knowledge and Scoped Teacher
Juncheng Wang, Junyuan Gao, Yuan Yuan, Qi Wang
12 Jun 2022

Parameter-Efficient and Student-Friendly Knowledge Distillation
Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao
28 May 2022

A Closer Look at Self-Supervised Lightweight Vision Transformers
Shaoru Wang, Jin Gao, Zeming Li, Jian Sun, Weiming Hu
ViT · 28 May 2022

Knowledge Distillation from A Stronger Teacher
Tao Huang, Shan You, Fei Wang, Chao Qian, Chang Xu
21 May 2022

Knowledge Distillation Meets Open-Set Semi-Supervised Learning
Jing Yang, Xiatian Zhu, Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos
13 May 2022

Spot-adaptive Knowledge Distillation
Jie Song, Ying Chen, Jingwen Ye, Mingli Song
05 May 2022

Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye, Su Lu, De-Chuan Zhan
FedML · 04 May 2022

Masked Generative Distillation
Zhendong Yang, Zhe Li, Mingqi Shao, Dachuan Shi, Zehuan Yuan, Chun Yuan
FedML · 03 May 2022

Investigating Top-$k$ White-Box and Transferable Black-box Attack
Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae-Won Cho, Kang Zhang, In So Kweon
30 Mar 2022

PCA-Based Knowledge Distillation Towards Lightweight and Content-Style Balanced Photorealistic Style Transfer Models
Tai-Yin Chiu, Danna Gurari
25 Mar 2022

Better Supervisory Signals by Observing Learning Paths
Yi Ren, Shangmin Guo, Danica J. Sutherland
04 Mar 2022

Meta Knowledge Distillation
Jihao Liu, Boxiao Liu, Hongsheng Li, Yu Liu
16 Feb 2022

Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation
Li Liu, Qingle Huang, Sihao Lin, Hongwei Xie, Bing Wang, Xiaojun Chang, Xiao-Xue Liang
08 Feb 2022

Adaptive Mixing of Auxiliary Losses in Supervised Learning
D. Sivasubramanian, Ayush Maheshwari, Pradeep Shenoy, A. Prathosh, Ganesh Ramakrishnan
07 Feb 2022

Cross-Modality Deep Feature Learning for Brain Tumor Segmentation
Dingwen Zhang, Guohai Huang, Qiang Zhang, Jungong Han, Junwei Han, Yizhou Yu
07 Jan 2022

ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, ..., Tian Wu, Wei Zeng, Ge Li, Wen Gao, Haifeng Wang
ELM · 23 Dec 2021

Improved Knowledge Distillation via Adversarial Collaboration
Zhiqiang Liu, Chengkai Huang, Yanxia Liu
29 Nov 2021

Meta-Teacher For Face Anti-Spoofing
Yunxiao Qin, Zitong Yu, Longbin Yan, Zezheng Wang, Chenxu Zhao, Zhen Lei
CVBM · 12 Nov 2021

Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models
J. Yoon, H. Kim, Hyeon Seung Lee, Sunghwan Ahn, N. Kim
05 Nov 2021

Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
MQ · 01 Nov 2021

Rethinking the Knowledge Distillation From the Perspective of Model Calibration
Lehan Yang, Jincen Song
31 Oct 2021

Diversity Matters When Learning From Ensembles
G. Nam, Jongmin Yoon, Yoonho Lee, Juho Lee
UQCV · FedML · VLM · 27 Oct 2021

Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher
Mehdi Rezagholizadeh, A. Jafari, Puneeth Salad, Pranav Sharma, Ali Saheb Pasand, A. Ghodsi
16 Oct 2021

Prune Your Model Before Distill It
Jinhyuk Park, Albert No
VLM · 30 Sep 2021

Partial to Whole Knowledge Distillation: Progressive Distilling Decomposed Knowledge Boosts Student Better
Xuanyang Zhang, Xinming Zhang, Jian Sun
26 Sep 2021

FedZKT: Zero-Shot Knowledge Transfer towards Resource-Constrained Federated Learning with Heterogeneous On-Device Models
Lan Zhang, Dapeng Wu, Xiaoyong Yuan
FedML · 08 Sep 2021

SAGE: A Split-Architecture Methodology for Efficient End-to-End Autonomous Vehicle Control
Arnav V. Malawade, Mohanad Odema, Sebastien Lajeunesse-DeGroot, M. A. Al Faruque
22 Jul 2021

Isotonic Data Augmentation for Knowledge Distillation
Wanyun Cui, Sen Yan
03 Jul 2021

Co-advise: Cross Inductive Bias Distillation
Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao
23 Jun 2021

Knowledge distillation: A good teacher is patient and consistent
Lucas Beyer, Xiaohua Zhai, Amelie Royer, L. Markeeva, Rohan Anil, Alexander Kolesnikov
VLM · 09 Jun 2021

Towards Compact Single Image Super-Resolution via Contrastive Self-distillation
Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao
SupR · 25 May 2021

Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks
Sayeed Shafayet Chowdhury, Isha Garg, Kaushik Roy
26 Apr 2021

Knowledge Distillation as Semiparametric Inference
Tri Dao, G. Kamath, Vasilis Syrgkanis, Lester W. Mackey
20 Apr 2021

Distilling a Powerful Student Model via Online Knowledge Distillation
Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji
FedML · 26 Mar 2021

DeepReDuce: ReLU Reduction for Fast Private Inference
N. Jha, Zahra Ghodsi, S. Garg, Brandon Reagen
02 Mar 2021

There is More than Meets the Eye: Self-Supervised Multi-Object Detection and Tracking with Sound by Distilling Multimodal Knowledge
Francisco Rivera Valverde, Juana Valeria Hurtado, Abhinav Valada
01 Mar 2021

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu
SSL · 12 Jan 2021

Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Guodong Xu, Ziwei Liu, Chen Change Loy
UQCV · 17 Dec 2020

Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels
Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, Pheng-Ann Heng
NoLa · 08 Dec 2020

Data-Free Model Extraction
Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot
MIACV · 30 Nov 2020

Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
Guangda Ji, Zhanxing Zhu
20 Oct 2020

Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang
18 Sep 2020

Prime-Aware Adaptive Distillation
Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang, Yichen Wei
04 Aug 2020