A Comprehensive Overhaul of Feature Distillation

3 April 2019
Byeongho Heo
Jeesoo Kim
Sangdoo Yun
Hyojin Park
Nojun Kwak
J. Choi
arXiv: 1904.01866

Papers citing "A Comprehensive Overhaul of Feature Distillation"

48 of 98 citing papers shown
Respecting Transfer Gap in Knowledge Distillation
Yulei Niu
Long Chen
Chan Zhou
Hanwang Zhang
26
23
0
23 Oct 2022
Bi-directional Weakly Supervised Knowledge Distillation for Whole Slide Image Classification
Linhao Qu
Xiao-Zhuo Luo
Manning Wang
Zhijian Song
WSOD
26
57
0
07 Oct 2022
Generative Adversarial Super-Resolution at the Edge with Knowledge Distillation
Simone Angarano
Francesco Salvetti
Mauro Martini
Marcello Chiaberge
GAN
33
21
0
07 Sep 2022
Masked Autoencoders Enable Efficient Knowledge Distillers
Yutong Bai
Zeyu Wang
Junfei Xiao
Chen Wei
Huiyu Wang
Alan Yuille
Yuyin Zhou
Cihang Xie
CLL
26
39
0
25 Aug 2022
Rethinking Knowledge Distillation via Cross-Entropy
Zhendong Yang
Zhe Li
Yuan Gong
Tianke Zhang
Shanshan Lao
Chun Yuan
Yu Li
25
14
0
22 Aug 2022
Lipschitz Continuity Retained Binary Neural Network
Yuzhang Shang
Dan Xu
Bin Duan
Ziliang Zong
Liqiang Nie
Yan Yan
16
19
0
13 Jul 2022
ACT-Net: Asymmetric Co-Teacher Network for Semi-supervised Memory-efficient Medical Image Segmentation
Ziyuan Zhao
An Zhu
Zeng Zeng
B. Veeravalli
Cuntai Guan
21
9
0
05 Jul 2022
Boosting Single-Frame 3D Object Detection by Simulating Multi-Frame Point Clouds
Wu Zheng
Li Jiang
Fanbin Lu
Yangyang Ye
Chi-Wing Fu
3DPC
ObjD
32
9
0
03 Jul 2022
Boosting 3D Object Detection by Simulating Multimodality on Point Clouds
Wu Zheng
Ming-Hong Hong
Li Jiang
Chi-Wing Fu
3DPC
31
30
0
30 Jun 2022
Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing?
Keshigeyan Chandrasegaran
Ngoc-Trung Tran
Yunqing Zhao
Ngai-man Cheung
83
41
0
29 Jun 2022
PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection
Linfeng Zhang
Runpei Dong
Hung-Shuo Tai
Kaisheng Ma
3DPC
72
47
0
23 May 2022
Knowledge Distillation via the Target-aware Transformer
Sihao Lin
Hongwei Xie
Bing Wang
Kaicheng Yu
Xiaojun Chang
Xiaodan Liang
G. Wang
ViT
20
104
0
22 May 2022
Knowledge Distillation from A Stronger Teacher
Tao Huang
Shan You
Fei Wang
Chao Qian
Chang Xu
17
235
0
21 May 2022
[Re] Distilling Knowledge via Knowledge Review
Apoorva Verma
Pranjal Gulati
Sarthak Gupta
VLM
16
0
0
18 May 2022
Knowledge Distillation Meets Open-Set Semi-Supervised Learning
Jing Yang
Xiatian Zhu
Adrian Bulat
Brais Martínez
Georgios Tzimiropoulos
31
7
0
13 May 2022
Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye
Su Lu
De-Chuan Zhan
FedML
22
20
0
04 May 2022
Masked Generative Distillation
Zhendong Yang
Zhe Li
Mingqi Shao
Dachuan Shi
Zehuan Yuan
Chun Yuan
FedML
27
168
0
03 May 2022
Cross-Image Relational Knowledge Distillation for Semantic Segmentation
Chuanguang Yang
Helong Zhou
Zhulin An
Xue Jiang
Yong Xu
Qian Zhang
28
169
0
14 Apr 2022
LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection
Yi Wei
Zibu Wei
Yongming Rao
Jiaxin Li
Jie Zhou
Jiwen Lu
39
63
0
28 Mar 2022
Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability
Ruifei He
Shuyang Sun
Jihan Yang
Song Bai
Xiaojuan Qi
29
36
0
10 Mar 2022
Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation
Li Liu
Qingle Huang
Sihao Lin
Hongwei Xie
Bing Wang
Xiaojun Chang
Xiao-Xue Liang
28
100
0
08 Feb 2022
EvDistill: Asynchronous Events to End-task Learning via Bidirectional Reconstruction-guided Cross-modal Knowledge Distillation
Lin Wang
Yujeong Chae
Sung-Hoon Yoon
Tae-Kyun Kim
Kuk-Jin Yoon
28
64
0
24 Nov 2021
Local-Selective Feature Distillation for Single Image Super-Resolution
Seonguk Park
Nojun Kwak
16
9
0
22 Nov 2021
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
Muhammad Awais
Fengwei Zhou
Chuanlong Xie
Jiawei Li
Sung-Ho Bae
Zhenguo Li
AAML
35
17
0
09 Nov 2021
PP-ShiTu: A Practical Lightweight Image Recognition System
Shengyun Wei
Ruoyu Guo
Cheng Cui
Bin Lu
Shuilong Dong
...
Xueying Lyu
Qiwen Liu
Xiaoguang Hu
Dianhai Yu
Yanjun Ma
CVBM
24
6
0
01 Nov 2021
Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation
Sumanth Chennupati
Mohammad Mahdi Kamani
Zhongwei Cheng
Lin Chen
26
4
0
19 Oct 2021
Dual Transfer Learning for Event-based End-task Prediction via Pluggable Event to Image Translation
Lin Wang
Yujeong Chae
Kuk-Jin Yoon
27
32
0
04 Sep 2021
LIGA-Stereo: Learning LiDAR Geometry Aware Representations for Stereo-based 3D Detector
Xiaoyang Guo
Shaoshuai Shi
Xiaogang Wang
Hongsheng Li
3DPC
25
106
0
18 Aug 2021
PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation
Jang-Hyun Kim
Simyung Chang
Nojun Kwak
22
44
0
25 Jun 2021
Fast Camera Image Denoising on Mobile GPUs with Deep Learning, Mobile AI 2021 Challenge: Report
Andrey D. Ignatov
Kim Byeoung-su
Radu Timofte
Angeline Pouget
Fenglong Song
...
Lei Lei
Chaoyu Feng
L. Huang
Z. Lei
Feifei Chen
19
30
0
17 May 2021
Distilling Knowledge via Knowledge Review
Pengguang Chen
Shu-Lin Liu
Hengshuang Zhao
Jiaya Jia
149
420
0
19 Apr 2021
Distilling Object Detectors via Decoupled Features
Jianyuan Guo
Kai Han
Yunhe Wang
Han Wu
Xinghao Chen
Chunjing Xu
Chang Xu
35
199
0
26 Mar 2021
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Guodong Xu
Ziwei Liu
Chen Change Loy
UQCV
21
39
0
17 Dec 2020
Cross-Layer Distillation with Semantic Calibration
Defang Chen
Jian-Ping Mei
Yuan Zhang
Can Wang
Yan Feng
Chun-Yen Chen
FedML
45
286
0
06 Dec 2020
Federated Knowledge Distillation
Hyowoon Seo
Jihong Park
Seungeun Oh
M. Bennis
Seong-Lyun Kim
FedML
28
90
0
04 Nov 2020
Kernel Based Progressive Distillation for Adder Neural Networks
Yixing Xu
Chang Xu
Xinghao Chen
Wei Zhang
Chunjing Xu
Yunhe Wang
35
47
0
28 Sep 2020
Differentiable Feature Aggregation Search for Knowledge Distillation
Yushuo Guan
Pengyu Zhao
Bingxuan Wang
Yuanxing Zhang
Cong Yao
Kaigui Bian
Jian Tang
FedML
17
44
0
02 Aug 2020
Learning with Privileged Information for Efficient Image Super-Resolution
Wonkyung Lee
Junghyup Lee
Dohyung Kim
Bumsub Ham
33
134
0
15 Jul 2020
Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation
Le Thanh Nguyen-Meidine
Atif Bela
M. Kiran
Jose Dolz
Louis-Antoine Blais-Morin
Eric Granger
32
81
0
14 Jul 2020
Dynamic Group Convolution for Accelerating Convolutional Neural Networks
Z. Su
Linpu Fang
Wenxiong Kang
D. Hu
M. Pietikäinen
Li Liu
13
44
0
08 Jul 2020
Knowledge Distillation: A Survey
Jianping Gou
B. Yu
Stephen J. Maybank
Dacheng Tao
VLM
19
2,837
0
09 Jun 2020
Self-Distillation as Instance-Specific Label Smoothing
Zhilu Zhang
M. Sabuncu
12
115
0
09 Jun 2020
Multi-view Contrastive Learning for Online Knowledge Distillation
Chuanguang Yang
Zhulin An
Yongjun Xu
9
23
0
07 Jun 2020
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang
Yandong Li
Liqiang Wang
Boqing Gong
21
48
0
31 Mar 2020
QKD: Quantization-aware Knowledge Distillation
Jangho Kim
Yash Bhalgat
Jinwon Lee
Chirag I. Patel
Nojun Kwak
MQ
16
63
0
28 Nov 2019
Search to Distill: Pearls are Everywhere but not the Eyes
Yu Liu
Xuhui Jia
Mingxing Tan
Raviteja Vemulapalli
Yukun Zhu
Bradley Green
Xiaogang Wang
22
67
0
20 Nov 2019
Preparing Lessons: Improve Knowledge Distillation with Better Supervision
Tiancheng Wen
Shenqi Lai
Xueming Qian
25
67
0
18 Nov 2019
Knowledge Transfer Graph for Deep Collaborative Learning
Soma Minami
Tsubasa Hirakawa
Takayoshi Yamashita
H. Fujiyoshi
20
9
0
10 Sep 2019