FitNets: Hints for Thin Deep Nets

19 December 2014
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio
    FedML

Papers citing "FitNets: Hints for Thin Deep Nets"

Showing 50 of 667 citing papers.

Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis
Chaowei Fang, Liang Wang, Dingwen Zhang, Jun Xu, Yixuan Yuan, Junwei Han
OOD · 20 Dec 2021

A Deep Knowledge Distillation framework for EEG assisted enhancement of single-lead ECG based sleep staging
Vaibhav Joshi, S. Vijayarangan, S. Preejith, M. Sivaprakasam
14 Dec 2021

Knowledge Distillation for Object Detection via Rank Mimicking and Prediction-guided Feature Imitation
Gang Li, Xiang Li, Yujie Wang, Shanshan Zhang, Yichao Wu, Ding Liang
ObjD · 09 Dec 2021

ADD: Frequency Attention and Multi-View based Knowledge Distillation to Detect Low-Quality Compressed Deepfake Images
B. Le, Simon S. Woo
AAML · 07 Dec 2021

Toward Practical Monocular Indoor Depth Estimation
Cho-Ying Wu, Jialiang Wang, Michael Hall, Ulrich Neumann, Shuochen Su
3DV · MDE · 04 Dec 2021

The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano, Aaqib Saeed
01 Dec 2021

Information Theoretic Representation Distillation
Roy Miles, Adrian Lopez-Rodriguez, K. Mikolajczyk
MQ · 01 Dec 2021

Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition
Junhao Xu, Jianwei Yu, Shoukang Hu, Xunying Liu, Helen Meng
MQ · 29 Nov 2021

Improved Knowledge Distillation via Adversarial Collaboration
Zhiqiang Liu, Chengkai Huang, Yanxia Liu
29 Nov 2021

ESGN: Efficient Stereo Geometry Network for Fast 3D Object Detection
Aqi Gao, Yanwei Pang, Jing Nie, Jiale Cao, Yishun Guo
3DPC · 28 Nov 2021

Self-slimmed Vision Transformer
Zhuofan Zong, Kunchang Li, Guanglu Song, Yali Wang, Yu Qiao, B. Leng, Yu Liu
ViT · 24 Nov 2021

EvDistill: Asynchronous Events to End-task Learning via Bidirectional Reconstruction-guided Cross-modal Knowledge Distillation
Lin Wang, Yujeong Chae, Sung-Hoon Yoon, Tae-Kyun Kim, Kuk-Jin Yoon
24 Nov 2021

Local-Selective Feature Distillation for Single Image Super-Resolution
Seonguk Park, Nojun Kwak
22 Nov 2021

Hierarchical Knowledge Distillation for Dialogue Sequence Labeling
Shota Orihashi, Yoshihiro Yamazaki, Naoki Makishima, Mana Ihori, Akihiko Takashima, Tomohiro Tanaka, Ryo Masumura
22 Nov 2021

Robust and Accurate Object Detection via Self-Knowledge Distillation
Weipeng Xu, Pengzhi Chu, Renhao Xie, Xiongziyan Xiao, Hongcheng Huang
AAML · ObjD · 14 Nov 2021

Learning Data Teaching Strategies Via Knowledge Tracing
Ghodai M. Abdelrahman, Qing Wang
13 Nov 2021

Facial Landmark Points Detection Using Knowledge Distillation-Based Neural Networks
A. P. Fard, Mohammad H. Mahoor
CVBM · 13 Nov 2021

Meta-Teacher For Face Anti-Spoofing
Yunxiao Qin, Zitong Yu, Longbin Yan, Zezheng Wang, Chenxu Zhao, Zhen Lei
CVBM · 12 Nov 2021

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li
AAML · 09 Nov 2021

Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods
Wenqing Zheng, Edward W. Huang, Nikhil S. Rao, S. Katariya, Zhangyang Wang, Karthik Subbian
08 Nov 2021

Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models
J. Yoon, H. Kim, Hyeon Seung Lee, Sunghwan Ahn, N. Kim
05 Nov 2021

Multi-Glimpse Network: A Robust and Efficient Classification Architecture based on Recurrent Downsampled Attention
S. Tan, Runpei Dong, Kaisheng Ma
03 Nov 2021

Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
MQ · 01 Nov 2021

Learning Distilled Collaboration Graph for Multi-Agent Perception
Yiming Li, Shunli Ren, Pengxiang Wu, Siheng Chen, Chen Feng, Wenjun Zhang
01 Nov 2021

Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme
Shaojie Li, Jie Wu, Xuefeng Xiao, Rongrong Ji, Xudong Mao, Rongrong Ji
27 Oct 2021

Reconstructing Pruned Filters using Cheap Spatial Transformations
Roy Miles, K. Mikolajczyk
25 Oct 2021

Instance-Conditional Knowledge Distillation for Object Detection
Zijian Kang, Peizhen Zhang, Xinming Zhang, Jian Sun, N. Zheng
25 Oct 2021

MUSE: Feature Self-Distillation with Mutual Information and Self-Information
Yunpeng Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen
SSL · 25 Oct 2021

Pixel-by-Pixel Cross-Domain Alignment for Few-Shot Semantic Segmentation
A. Tavera, Fabio Cermelli, Carlo Masone, Barbara Caputo
22 Oct 2021

Augmenting Knowledge Distillation With Peer-To-Peer Mutual Learning For Model Compression
Usma Niyaz, Deepti R. Bathula
21 Oct 2021

Class-Discriminative CNN Compression
Yuchen Liu, D. Wentzlaff, S. Kung
21 Oct 2021

Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation
Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen
19 Oct 2021

Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks
Yikai Wang, Yi Yang, Gang Hua, Anbang Yao
MQ · 18 Oct 2021

Object DGCNN: 3D Object Detection using Dynamic Graphs
Yue Wang, Justin Solomon
3DPC · 13 Oct 2021

Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization
Weihan Chen, Peisong Wang, Jian Cheng
MQ · 13 Oct 2021

Towards Streaming Egocentric Action Anticipation
Antonino Furnari, G. Farinella
EgoV · 11 Oct 2021

KNOT: Knowledge Distillation using Optimal Transport for Solving NLP Tasks
Rishabh Bhardwaj, Tushar Vaidya, Soujanya Poria
OT · FedML · 06 Oct 2021

Multilingual AMR Parsing with Noisy Knowledge Distillation
Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, W. Lam
30 Sep 2021

Towards Efficient Post-training Quantization of Pre-trained Language Models
Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, M. Lyu
MQ · 30 Sep 2021

Prune Your Model Before Distill It
Jinhyuk Park, Albert No
VLM · 30 Sep 2021

Deep Structured Instance Graph for Distilling Object Detectors
Yixin Chen, Pengguang Chen, Shu Liu, Liwei Wang, Jiaya Jia
ObjD · ISeg · 27 Sep 2021

Partial to Whole Knowledge Distillation: Progressive Distilling Decomposed Knowledge Boosts Student Better
Xuanyang Zhang, Xinming Zhang, Jian Sun
26 Sep 2021

Weakly-Supervised Monocular Depth Estimation with Resolution-Mismatched Data
Jialei Xu, Yuanchao Bai, Xianming Liu, Junjun Jiang, Xiangyang Ji
MDE · 23 Sep 2021

Dynamic Knowledge Distillation for Pre-trained Language Models
Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun
23 Sep 2021

A Studious Approach to Semi-Supervised Learning
Sahil Khose, Shruti Jain, V. Manushree
18 Sep 2021

New Perspective on Progressive GANs Distillation for One-class Novelty Detection
Zhiwei Zhang, Yu Dong, Hanyu Peng, Shifeng Chen
15 Sep 2021

On the Efficiency of Subclass Knowledge Distillation in Classification Tasks
A. Sajedi, Konstantinos N. Plataniotis
12 Sep 2021

Facial Anatomical Landmark Detection using Regularized Transfer Learning with Application to Fetal Alcohol Syndrome Recognition
Zeyu Fu, Jianbo Jiao, M. Suttie, J. A. Noble
CVBM · 12 Sep 2021

Dual Correction Strategy for Ranking Distillation in Top-N Recommender System
Youngjune Lee, Kee-Eung Kim
08 Sep 2021

Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
07 Sep 2021