FitNets: Hints for Thin Deep Nets
arXiv:1412.6550 · 19 December 2014
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
Papers citing "FitNets: Hints for Thin Deep Nets" (50 of 667 shown)
AttTrack: Online Deep Attention Transfer for Multi-object Tracking
Keivan Nalaie, Rong Zheng · VOT · 16 Oct 2022

Federated Learning with Privacy-Preserving Ensemble Attention Distillation
Xuan Gong, Liangchen Song, Rishi Vedula, Abhishek Sharma, Meng Zheng, ..., Arun Innanje, Terrence Chen, Junsong Yuan, David Doermann, Ziyan Wu · FedML · 16 Oct 2022
Boosting Graph Neural Networks via Adaptive Knowledge Distillation
Zhichun Guo, Chunhui Zhang, Yujie Fan, Yijun Tian, Chuxu Zhang, Nitesh V. Chawla · 12 Oct 2022

Linkless Link Prediction via Relational Distillation
Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V. Chawla, Neil Shah, Tong Zhao · 11 Oct 2022

Knowledge Distillation Transfer Sets and their Impact on Downstream NLU Tasks
Charith Peris, Lizhen Tan, Thomas Gueudré, Turan Gojayev, Vivi Wei, Gokmen Oz · 10 Oct 2022
Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing
Peng Ye, Shengji Tang, Baopu Li, Tao Chen, Wanli Ouyang · 09 Oct 2022

Teaching Where to Look: Attention Similarity Knowledge Distillation for Low Resolution Face Recognition
Sungho Shin, Joosoon Lee, Junseok Lee, Yeonguk Yu, Kyoobin Lee · CVBM · 29 Sep 2022

PROD: Progressive Distillation for Dense Retrieval
Zhenghao Lin, Yeyun Gong, Xiao Liu, Hang Zhang, Chen Lin, ..., Jian Jiao, Jing Lu, Daxin Jiang, Rangan Majumder, Nan Duan · 27 Sep 2022
MiNL: Micro-images based Neural Representation for Light Fields
Ziru Xu, Henan Wang, Zhibo Chen · 17 Sep 2022

On-Device Domain Generalization
Kaiyang Zhou, Yuanhan Zhang, Yuhang Zang, Jingkang Yang, Chen Change Loy, Ziwei Liu · OOD · 15 Sep 2022

Exploring Target Representations for Masked Autoencoders
Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, Rongrong Ji · 08 Sep 2022

Generative Adversarial Super-Resolution at the Edge with Knowledge Distillation
Simone Angarano, Francesco Salvetti, Mauro Martini, Marcello Chiaberge · GAN · 07 Sep 2022
A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition
Duc-Quang Vu, T. Phung, Jia-Ching Wang · 03 Sep 2022

Membership Inference Attacks by Exploiting Loss Trajectory
Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang · 31 Aug 2022

Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective
Jiangmeng Li, Yanan Zhang, Jingyao Wang, Hui Xiong, Chengbo Jiao, Xiaohui Hu, Changwen Zheng, Gang Hua · CML · 26 Aug 2022

CMD: Self-supervised 3D Action Representation Learning with Cross-modal Mutual Distillation
Yunyao Mao, Wen-gang Zhou, Zhenbo Lu, Jiajun Deng, Houqiang Li · 26 Aug 2022
Masked Autoencoders Enable Efficient Knowledge Distillers
Yutong Bai, Zeyu Wang, Junfei Xiao, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie · CLL · 25 Aug 2022

Multi-Granularity Distillation Scheme Towards Lightweight Semi-Supervised Semantic Segmentation
Jie Qin, Jie Wu, Ming Li, Xu Xiao, Min Zheng, Xingang Wang · 22 Aug 2022

Rethinking Knowledge Distillation via Cross-Entropy
Zhendong Yang, Zhe Li, Yuan Gong, Tianke Zhang, Shanshan Lao, Chun Yuan, Yu Li · 22 Aug 2022

Effectiveness of Function Matching in Driving Scene Recognition
Shingo Yashima · 20 Aug 2022
Task-Balanced Distillation for Object Detection
Ruining Tang, Zhen-yu Liu, Yangguang Li, Yiguo Song, Hui Liu, Qide Wang, Jing Shao, Guifang Duan, Jianrong Tan · 05 Aug 2022

Overlooked Poses Actually Make Sense: Distilling Privileged Knowledge for Human Motion Prediction
Xiaoning Sun, Qiongjie Cui, Huaijiang Sun, Bin Li, Weiqing Li, Jianfeng Lu · 02 Aug 2022

Domain-invariant Feature Exploration for Domain Generalization
Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie · OOD · 25 Jul 2022

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang · 23 Jul 2022
Locality Guidance for Improving Vision Transformers on Tiny Datasets
Kehan Li, Runyi Yu, Zhennan Wang, Li-ming Yuan, Guoli Song, Jie Chen · ViT · 20 Jul 2022

Multi-Level Branched Regularization for Federated Learning
Jinkyu Kim, Geeho Kim, Bohyung Han · FedML · 14 Jul 2022

Distilled Non-Semantic Speech Embeddings with Binary Neural Networks for Low-Resource Devices
Harlin Lee, Aaqib Saeed · 12 Jul 2022

Normalized Feature Distillation for Semantic Segmentation
Tao Liu, Xi Yang, Chenshu Chen · 12 Jul 2022

Factorizing Knowledge in Neural Networks
Xingyi Yang, Jingwen Ye, Xinchao Wang · MoMe · 04 Jul 2022
PrUE: Distilling Knowledge from Sparse Teacher Networks
Shaopu Wang, Xiaojun Chen, Mengzhen Kou, Jinqiao Shi · 03 Jul 2022

FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
Yeonghyeon Lee, Kangwook Jang, Jahyun Goo, Youngmoon Jung, Hoi-Rim Kim · 01 Jul 2022

ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State
Xinshao Wang, Yang Hua, Elyor Kodirov, S. Mukherjee, David A. Clifton, N. Robertson · 30 Jun 2022

Teach me how to Interpolate a Myriad of Embeddings
Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis · 29 Jun 2022

Cut Inner Layers: A Structured Pruning Strategy for Efficient U-Net GANs
Bo-Kyeong Kim, Shinkook Choi, Hancheol Park · 29 Jun 2022
Knowledge Distillation of Transformer-based Language Models Revisited
Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang · VLM · 29 Jun 2022

MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare
Yiqiang Chen, Wang Lu, Xin Qin, Jindong Wang, Xing Xie · FedML · 17 Jun 2022

Multi scale Feature Extraction and Fusion for Online Knowledge Distillation
Panpan Zou, Yinglei Teng, Tao Niu · 16 Jun 2022

Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding
Yichen Liu, C. Wang, Defang Chen, Zhehui Zhou, Yan Feng, Chun-Yen Chen · 07 Jun 2022
Evaluation-oriented Knowledge Distillation for Deep Face Recognition
Y. Huang, Jiaxiang Wu, Xingkun Xu, Shouhong Ding · CVBM · 06 Jun 2022

Point-to-Voxel Knowledge Distillation for LiDAR Semantic Segmentation
Yuenan Hou, Xinge Zhu, Yuexin Ma, Chen Change Loy, Yikang Li · 3DPC · 05 Jun 2022

Towards Efficient 3D Object Detection with Knowledge Distillation
Jihan Yang, Shaoshuai Shi, Runyu Ding, Zhe Wang, Xiaojuan Qi · 30 May 2022

Parameter-Efficient and Student-Friendly Knowledge Distillation
Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao · 28 May 2022
A Closer Look at Self-Supervised Lightweight Vision Transformers
Shaoru Wang, Jin Gao, Zeming Li, Jian Sun, Weiming Hu · ViT · 28 May 2022

Fast Object Placement Assessment
Li Niu, Qingyang Liu, Zhenchen Liu, Jiangtong Li · 28 May 2022

Embedding Principle in Depth for the Loss Landscape Analysis of Deep Neural Networks
Zhiwei Bai, Tao Luo, Z. Xu, Yaoyu Zhang · 26 May 2022

IDEAL: Query-Efficient Data-Free Learning from Black-box Models
Jie M. Zhang, Chen Chen, Lingjuan Lyu · 23 May 2022

PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection
Linfeng Zhang, Runpei Dong, Hung-Shuo Tai, Kaisheng Ma · 3DPC · 23 May 2022
Knowledge Distillation via the Target-aware Transformer
Sihao Lin, Hongwei Xie, Bing Wang, Kaicheng Yu, Xiaojun Chang, Xiaodan Liang, G. Wang · ViT · 22 May 2022

Knowledge Distillation from A Stronger Teacher
Tao Huang, Shan You, Fei Wang, Chao Qian, Chang Xu · 21 May 2022

[Re] Distilling Knowledge via Knowledge Review
Apoorva Verma, Pranjal Gulati, Sarthak Gupta · VLM · 18 May 2022