Learning Student Networks via Feature Embedding
Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
arXiv:1812.06597 (17 December 2018)
Papers citing "Learning Student Networks via Feature Embedding" (17 of 17 papers shown)
| Title | Authors | Tags | Likes | Citations | Comments | Date |
|---|---|---|---|---|---|---|
| Exploring Feature-based Knowledge Distillation for Recommender System: A Frequency Perspective | Zhangchi Zhu, Wei Zhang | | 43 | 0 | 0 | 16 Nov 2024 |
| Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization | Jixuan Leng, Yijiang Li, Haohan Wang | VLM | 31 | 0 | 0 | 26 Nov 2023 |
| Understanding the Effects of Projectors in Knowledge Distillation | Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang | | 26 | 0 | 0 | 26 Oct 2023 |
| Learning to Learn from APIs: Black-Box Data-Free Meta-Learning | Zixuan Hu, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, Dacheng Tao | | 47 | 7 | 0 | 28 May 2023 |
| Graph-based Knowledge Distillation: A survey and experimental evaluation | Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao | | 29 | 8 | 0 | 27 Feb 2023 |
| Deep Semantic Statistics Matching (D2SM) Denoising Network | Kangfu Mei, Vishal M. Patel, Rui Huang | DiffM | 13 | 8 | 0 | 19 Jul 2022 |
| Knowledge Distillation from A Stronger Teacher | Tao Huang, Shan You, Fei Wang, Chao Qian, Chang Xu | | 17 | 235 | 0 | 21 May 2022 |
| Generalized Knowledge Distillation via Relationship Matching | Han-Jia Ye, Su Lu, De-Chuan Zhan | FedML | 22 | 20 | 0 | 04 May 2022 |
| Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability | Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi | | 24 | 36 | 0 | 10 Mar 2022 |
| Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation | Zhiwei Hao, Jianyuan Guo, Ding Jia, Kai Han, Yehui Tang, Chao Zhang, Dacheng Tao, Yunhe Wang | ViT | 33 | 68 | 0 | 03 Jul 2021 |
| Privileged Graph Distillation for Cold Start Recommendation | Shuai Wang, Kun Zhang, Le Wu, Haiping Ma, Richang Hong, Meng Wang | | 10 | 28 | 0 | 31 May 2021 |
| Distilling Object Detectors via Decoupled Features | Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu | | 24 | 199 | 0 | 26 Mar 2021 |
| Kernel Based Progressive Distillation for Adder Neural Networks | Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang | | 30 | 47 | 0 | 28 Sep 2020 |
| Knowledge Distillation: A Survey | Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao | VLM | 19 | 2,835 | 0 | 09 Jun 2020 |
| Preparing Lessons: Improve Knowledge Distillation with Better Supervision | Tiancheng Wen, Shenqi Lai, Xueming Qian | | 23 | 67 | 0 | 18 Nov 2019 |
| ReNAS: Relativistic Evaluation of Neural Architecture Search | Yixing Xu, Yunhe Wang, Avishkar Bhoopchand, Christopher Mattern, A. Grabska-Barwinska, Chunjing Xu, Chang Xu | | 20 | 81 | 0 | 30 Sep 2019 |
| Robust Student Network Learning | Tianyu Guo, Chang Xu, Shiyi He, Boxin Shi, Chao Xu, Dacheng Tao | OOD | 32 | 30 | 0 | 30 Jul 2018 |