On the Efficacy of Knowledge Distillation

3 October 2019 · arXiv:1910.01348
Jang Hyun Cho, Bharath Hariharan
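For context: the cited paper examines when the standard knowledge-distillation objective of Hinton et al. (2015) actually helps the student, and many of the papers listed below build on that same objective. Below is a minimal PyTorch sketch of it; the temperature `T` and mixing weight `alpha` are illustrative defaults, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 4.0,
                      alpha: float = 0.9) -> torch.Tensor:
    """Classic KD loss: a weighted sum of a soft-target KL term
    and ordinary cross-entropy on the ground-truth labels."""
    # Soft targets: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2 so its
    # gradient magnitude matches the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```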

Papers citing "On the Efficacy of Knowledge Distillation"

13 / 113 papers shown
  • Learning with Privileged Information for Efficient Image Super-Resolution
    Wonkyung Lee, Junghyup Lee, Dohyung Kim, Bumsub Ham
    15 Jul 2020 · 134 citations
  • Tracking-by-Trackers with a Distilled and Reinforced Model
    Matteo Dunnhofer, N. Martinel, C. Micheloni
    08 Jul 2020 · 4 citations · Topics: VOT, OffRL
  • Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation
    Rasool Fakoor, Jonas W. Mueller, Nick Erickson, Pratik Chaudhari, Alex Smola
    25 Jun 2020 · 54 citations
  • Knowledge Distillation: A Survey
    Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
    09 Jun 2020 · 2,851 citations · Topics: VLM
  • Self-Distillation as Instance-Specific Label Smoothing
    Zhilu Zhang, M. Sabuncu
    09 Jun 2020 · 116 citations
  • ResKD: Residual-Guided Knowledge Distillation
    Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li
    08 Jun 2020 · 47 citations
  • An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation
    Deepan Das, Haley Massa, Abhimanyu Kulkarni, Theodoros Rekatsinas
    06 Jun 2020 · 18 citations
  • An Overview of Neural Network Compression
    James O'Neill
    05 Jun 2020 · 98 citations · Topics: AI4CE
  • Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
    Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong
    31 Mar 2020 · 48 citations
  • A Survey of Methods for Low-Power Deep Learning and Computer Vision
    Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, George K. Thiruvathukal
    24 Mar 2020 · 92 citations · Topics: VLM
  • Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network
    Wonchul Son, Youngbin Kim, Wonseok Song, Youngsuk Moon, Wonjun Hwang
    09 Mar 2020 · 0 citations
  • Knowledge Transfer Graph for Deep Collaborative Learning
    Soma Minami, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi
    10 Sep 2019 · 9 citations
  • Knowledge Distillation by On-the-Fly Native Ensemble
    Xu Lan, Xiatian Zhu, S. Gong
    12 Jun 2018 · 474 citations