FitNets: Hints for Thin Deep Nets (arXiv:1412.6550)
19 December 2014
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio
Papers citing "FitNets: Hints for Thin Deep Nets" (showing 50 of 667)
| Title | Authors | Topics | Citations | Date |
|---|---|---|---|---|
| ResKD: Residual-Guided Knowledge Distillation | Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li | | 47 | 08 Jun 2020 |
| Multi-view Contrastive Learning for Online Knowledge Distillation | Chuanguang Yang, Zhulin An, Yongjun Xu | | 23 | 07 Jun 2020 |
| An Overview of Neural Network Compression | James O'Neill | AI4CE | 98 | 05 Jun 2020 |
| Class-Incremental Learning for Semantic Segmentation Re-Using Neither Old Data Nor Old Labels | Marvin Klingner, Andreas Bär, Philipp Donn, Tim Fingscheidt | CLL | 45 | 12 May 2020 |
| Data-Free Network Quantization With Adversarial Knowledge Distillation | Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee | MQ | 119 | 08 May 2020 |
| Structure-Level Knowledge Distillation For Multilingual Sequence Labeling | Xinyu Wang, Yong-jia Jiang, Nguyen Bach, Tao Wang, Fei Huang, Kewei Tu | | 36 | 08 Apr 2020 |
| FastBERT: a Self-distilling BERT with Adaptive Inference Time | Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, Qi Ju | | 354 | 05 Apr 2020 |
| Knowing What, Where and When to Look: Efficient Video Action Modeling with Attention | Juan-Manuel Perez-Rua, Brais Martínez, Xiatian Zhu, Antoine Toisoul, Victor Escorcia, Tao Xiang | | 19 | 02 Apr 2020 |
| Regularizing Class-wise Predictions via Self-knowledge Distillation | Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin | | 274 | 31 Mar 2020 |
| Introducing Pose Consistency and Warp-Alignment for Self-Supervised 6D Object Pose Estimation in Color Images | Juil Sock, Guillermo Garcia-Hernando, Anil Armagan, Tae-Kyun Kim | | 5 | 27 Mar 2020 |
| Circumventing Outliers of AutoAugment with Knowledge Distillation | Longhui Wei, Anxiang Xiao, Lingxi Xie, Xin Chen, Xiaopeng Zhang, Qi Tian | | 62 | 25 Mar 2020 |
| A Survey of Methods for Low-Power Deep Learning and Computer Vision | Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, George K. Thiruvathukal | VLM | 92 | 24 Mar 2020 |
| Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives | Duo Li, Qifeng Chen | | 19 | 24 Mar 2020 |
| Distilling Knowledge from Graph Convolutional Networks | Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang | | 226 | 23 Mar 2020 |
| Efficient Crowd Counting via Structured Knowledge Transfer | Lingbo Liu, Jiaqi Chen, Hefeng Wu, Tianshui Chen, Guanbin Li, Liang Lin | | 64 | 23 Mar 2020 |
| SuperMix: Supervising the Mixing Data Augmentation | Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi | | 98 | 10 Mar 2020 |
| Distilling portable Generative Adversarial Networks for Image Translation | Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu | | 83 | 07 Mar 2020 |
| Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs | Sangwoo Mo, Minsu Cho, Jinwoo Shin | | 212 | 25 Feb 2020 |
| MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers | Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou | VLM | 1,203 | 25 Feb 2020 |
| Residual Knowledge Distillation | Mengya Gao, Yujun Shen, Quanquan Li, Chen Change Loy | | 28 | 21 Feb 2020 |
| BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning | Yeming Wen, Dustin Tran, Jimmy Ba | OOD, FedML, UQCV | 482 | 17 Feb 2020 |
| Self-Distillation Amplifies Regularization in Hilbert Space | H. Mobahi, Mehrdad Farajtabar, Peter L. Bartlett | | 226 | 13 Feb 2020 |
| Subclass Distillation | Rafael Müller, Simon Kornblith, Geoffrey E. Hinton | | 33 | 10 Feb 2020 |
| Understanding and Improving Knowledge Distillation | Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain | | 129 | 10 Feb 2020 |
| Feature-map-level Online Adversarial Knowledge Distillation | Inseop Chung, Seonguk Park, Jangho Kim, Nojun Kwak | GAN | 128 | 05 Feb 2020 |
| Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms | G. Cerutti, Rahul Prasad, A. Brutti, Elisabetta Farella | | 47 | 29 Jan 2020 |
| Lightweight 3D Human Pose Estimation Network Training Using Teacher-Student Learning | D. Hwang, Suntae Kim, Nicolas Monet, Hideki Koike, Soonmin Bae | 3DH | 39 | 15 Jan 2020 |
| Resource-Efficient Neural Networks for Embedded Systems | Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani | | 47 | 07 Jan 2020 |
| Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification | Liuyu Xiang, Guiguang Ding, Jungong Han | | 279 | 06 Jan 2020 |
| ZeroQ: A Novel Zero Shot Quantization Framework | Yaohui Cai, Z. Yao, Zhen Dong, A. Gholami, Michael W. Mahoney, Kurt Keutzer | MQ | 389 | 01 Jan 2020 |
| AdderNet: Do We Really Need Multiplications in Deep Learning? | Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu | | 194 | 31 Dec 2019 |
| SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning | Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Mathias Niepert, Stefano Soatto | | 16 | 06 Dec 2019 |
| Blockwisely Supervised Neural Architecture Search with Knowledge Distillation | Changlin Li, Jiefeng Peng, Liuchun Yuan, Guangrun Wang, Xiaodan Liang, Liang Lin, Xiaojun Chang | | 179 | 29 Nov 2019 |
| Towards Oracle Knowledge Distillation with Neural Architecture Search | Minsoo Kang, Jonghwan Mun, Bohyung Han | FedML | 43 | 29 Nov 2019 |
| QKD: Quantization-aware Knowledge Distillation | Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag I. Patel, Nojun Kwak | MQ | 63 | 28 Nov 2019 |
| GhostNet: More Features from Cheap Operations | Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu | | 2,582 | 27 Nov 2019 |
| Towards Making Deep Transfer Learning Never Hurt | Ruosi Wan, Haoyi Xiong, Xingjian Li, Zhanxing Zhu, Jun Huan | | 21 | 18 Nov 2019 |
| Preparing Lessons: Improve Knowledge Distillation with Better Supervision | Tiancheng Wen, Shenqi Lai, Xueming Qian | | 67 | 18 Nov 2019 |
| Label-similarity Curriculum Learning | Ürün Dogan, A. Deshmukh, Marcin Machura, Christian Igel | | 21 | 15 Nov 2019 |
| Collaborative Distillation for Top-N Recommendation | Jae-woong Lee, Minjin Choi, Jongwuk Lee, Hyunjung Shim | | 47 | 13 Nov 2019 |
| Iteratively Training Look-Up Tables for Network Quantization | Fabien Cardinaux, Stefan Uhlich, K. Yoshiyama, Javier Alonso García, Lukas Mauch, Stephen Tiedemann, Thomas Kemp, Akira Nakamura | MQ | 16 | 12 Nov 2019 |
| Active Subspace of Neural Networks: Structural Analysis and Universal Attacks | Chunfeng Cui, Kaiqi Zhang, Talgat Daulbaev, Julia Gusak, Ivan Oseledets, Zheng-Wei Zhang | AAML | 25 | 29 Oct 2019 |
| Contrastive Representation Distillation | Yonglong Tian, Dilip Krishnan, Phillip Isola | | 1,031 | 23 Oct 2019 |
| 4-Connected Shift Residual Networks | Andrew Brown, Pascal Mettes, M. Worring | 3DPC | 8 | 22 Oct 2019 |
| VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition | Mengjia Yan, Mengao Zhao, Zining Xu, Qian Zhang, Guoli Wang, Zhizhong Su | CVBM | 91 | 11 Oct 2019 |
| DiabDeep: Pervasive Diabetes Diagnosis based on Wearable Medical Sensors and Efficient Neural Networks | Hongxu Yin, Bilal Mukadam, Xiaoliang Dai, N. Jha | | 47 | 11 Oct 2019 |
| Distilling BERT into Simple Neural Networks with Unlabeled Transfer Data | Subhabrata Mukherjee, Ahmed Hassan Awadallah | | 25 | 04 Oct 2019 |
| Deep Model Transferability from Attribution Maps | Mingli Song, Yixin Chen, Xinchao Wang, Chengchao Shen, Xiuming Zhang | | 54 | 26 Sep 2019 |
| Compact Trilinear Interaction for Visual Question Answering | Tuong Khanh Long Do, Thanh-Toan Do, Huy Tran, Erman Tjiputra, Quang-Dieu Tran | | 59 | 26 Sep 2019 |
| Revisiting Knowledge Distillation via Label Smoothing Regularization | Li-xin Yuan, Francis E. H. Tay, Guilin Li, Tao Wang, Jiashi Feng | | 90 | 25 Sep 2019 |