FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
FedML
19 December 2014
arXiv:1412.6550

Papers citing "FitNets: Hints for Thin Deep Nets"

Showing 50 of 667 citing papers.

Knowledge Distillation Meets Open-Set Semi-Supervised Learning
Jing Yang, Xiatian Zhu, Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos
13 May 2022

Contrastive Supervised Distillation for Continual Representation Learning
Tommaso Barletti, Niccolò Biondi, F. Pernici, Matteo Bruni, A. Bimbo
CLL
11 May 2022

Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye, Su Lu, De-Chuan Zhan
FedML
04 May 2022

Masked Generative Distillation
Zhendong Yang, Zhe Li, Mingqi Shao, Dachuan Shi, Zehuan Yuan, Chun Yuan
FedML
03 May 2022

Multiple Degradation and Reconstruction Network for Single Image Denoising via Knowledge Distillation
Juncheng Li, Hanhui Yang, Qiaosi Yi, Faming Fang, Guangwei Gao, T. Zeng, Guixu Zhang
29 Apr 2022

A Closer Look at Branch Classifiers of Multi-exit Architectures
Shaohui Lin, Bo Ji, Rongrong Ji, Angela Yao
28 Apr 2022

HRPose: Real-Time High-Resolution 6D Pose Estimation Network Using Knowledge Distillation
Qingze Guan, Zihao Sheng, Shibei Xue
3DH
20 Apr 2022

Empirical Evaluation and Theoretical Analysis for Representation Learning: A Survey
Kento Nozawa, Issei Sato
AI4TS
18 Apr 2022

MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation
Simiao Zuo, Qingru Zhang, Chen Liang, Pengcheng He, T. Zhao, Weizhu Chen
MoE
15 Apr 2022

2D Human Pose Estimation: A Survey
Haoming Chen, Runyang Feng, Sifan Wu, Hao Xu, F. Zhou, Zhenguang Liu
3DH
15 Apr 2022

Cross-Image Relational Knowledge Distillation for Semantic Segmentation
Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yong Xu, Qian Zhang
14 Apr 2022

Spatial Likelihood Voting with Self-Knowledge Distillation for Weakly Supervised Object Detection
Ze Chen, Zhihang Fu, Jianqiang Huang, Mingyuan Tao, Rongxin Jiang, Xiang Tian, Yao-wu Chen, Xiansheng Hua
WSOD
14 Apr 2022

Localization Distillation for Object Detection
Zhaohui Zheng, Rongguang Ye, Ping Wang, Dongwei Ren, Jun Wang, W. Zuo, Ming-Ming Cheng
12 Apr 2022

CoupleFace: Relation Matters for Face Recognition Distillation
Jiaheng Liu, Haoyu Qin, Yichao Wu, Jinyang Guo, Ding Liang, Ke Xu
CVBM
12 Apr 2022

Robust Cross-Modal Representation Learning with Progressive Self-Distillation
A. Andonian, Shixing Chen, Raffay Hamid
VLM
10 Apr 2022

CD²-pFed: Cyclic Distillation-guided Channel Decoupling for Model Personalization in Federated Learning
Yiqing Shen, Yuyin Zhou, Lequan Yu
OOD
08 Apr 2022

Universal Representations: A Unified Look at Multiple Task and Domain Learning
Wei-Hong Li, Xialei Liu, Hakan Bilen
SSL, OOD
06 Apr 2022

Non-Local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation
Jogendra Nath Kundu, Siddharth Seth, Anirudh Gururaj Jamkhandi, Pradyumna, Varun Jampani, Anirban Chakraborty, R. Venkatesh Babu
3DH
05 Apr 2022

Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation
Minsoo Kang, Jaeyoo Park, Bohyung Han
CLL
02 Apr 2022

R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis
Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, Sergey Tulyakov
31 Mar 2022

Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data
Corentin Sautier, Gilles Puy, Spyros Gidaris, Alexandre Boulch, Andrei Bursuc, Renaud Marlet
3DPC
30 Mar 2022

Self-Distillation from the Last Mini-Batch for Consistency Regularization
Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo
30 Mar 2022

LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection
Yi Wei, Zibu Wei, Yongming Rao, Jiaxin Li, Jie Zhou, Jiwen Lu
28 Mar 2022

Knowledge Distillation with the Reused Teacher Classifier
Defang Chen, Jianhan Mei, Hailin Zhang, C. Wang, Yan Feng, Chun-Yen Chen
26 Mar 2022

A Cross-Domain Approach for Continuous Impression Recognition from Dyadic Audio-Visual-Physio Signals
Yuanchao Li, Catherine Lai
25 Mar 2022

Class-Incremental Learning for Action Recognition in Videos
Jaeyoo Park, Minsoo Kang, Bohyung Han
CLL
25 Mar 2022

R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning
Qiankun Gao, Chen Zhao, Guohao Li, Jian Zhang
CLL
24 Mar 2022

Channel Self-Supervision for Online Knowledge Distillation
Shixi Fan, Xuan Cheng, Xiaomin Wang, Chun Yang, Pan Deng, Minghui Liu, Jiali Deng, Meilin Liu
22 Mar 2022

SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for Lightweight Skin Lesion Classification Using Dermoscopic Images
Yongwei Wang, Yuheng Wang, Tim K. Lee, Chunyan Miao, Z. J. Wang
22 Mar 2022

Open-Vocabulary One-Stage Detection with Hierarchical Visual-Language Knowledge Distillation
Zongyang Ma, Guan Luo, Jin Gao, Liang Li, Yuxin Chen, Shaoru Wang, Congxuan Zhang, Weiming Hu
VLM, ObjD
20 Mar 2022

X-Learner: Learning Cross Sources and Tasks for Universal Visual Representation
Yinan He, Gengshi Huang, Siyu Chen, Jianing Teng, Wang Kun, Zhen-fei Yin, Lu Sheng, Ziwei Liu, Yu Qiao, Jing Shao
VLM, SSL, ViT
16 Mar 2022

Representation Compensation Networks for Continual Semantic Segmentation
Chang-Bin Zhang, Jianqiang Xiao, Xialei Liu, Ying-Cong Chen, Ming-Ming Cheng
SSeg, CLL
10 Mar 2022

Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability
Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi
10 Mar 2022

MSDN: Mutually Semantic Distillation Network for Zero-Shot Learning
Shiming Chen, Ziming Hong, Guosen Xie, Wenhan Wang, Qinmu Peng, Kai Wang, Jian-jun Zhao, Xinge You
VLM
07 Mar 2022

Extracting Effective Subnetworks with Gumbel-Softmax
Robin Dupont, M. Alaoui, H. Sahbi, A. Lebois
25 Feb 2022

Learn From the Past: Experience Ensemble Knowledge Distillation
Chaofei Wang, Shaowei Zhang, S. Song, Gao Huang
25 Feb 2022

Efficient Video Segmentation Models with Per-frame Inference
Yifan Liu, Chunhua Shen, Changqian Yu, Jingdong Wang
24 Feb 2022

HRel: Filter Pruning based on High Relevance between Activation Maps and Class Labels
C. Sarvani, Mrinmoy Ghorai, S. Dubey, S. H. Shabbeer Basha
VLM
22 Feb 2022

Meta Knowledge Distillation
Jihao Liu, Boxiao Liu, Hongsheng Li, Yu Liu
16 Feb 2022

Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation
Li Liu, Qingle Huang, Sihao Lin, Hongwei Xie, Bing Wang, Xiaojun Chang, Xiao-Xue Liang
08 Feb 2022

Auto-Transfer: Learning to Route Transferrable Representations
K. Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar
AAML
02 Feb 2022

Deconfounded Representation Similarity for Comparison of Neural Networks
Tianyu Cui, Yogesh Kumar, Pekka Marttinen, Samuel Kaski
CML
31 Jan 2022

AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models
Dongkuan Xu, Subhabrata Mukherjee, Xiaodong Liu, Debadeepta Dey, Wenhui Wang, Xiang Zhang, Ahmed Hassan Awadallah, Jianfeng Gao
29 Jan 2022

Dynamic Rectification Knowledge Distillation
Fahad Rahman Amik, Ahnaf Ismat Tasin, Silvia Ahmed, M. M. L. Elahi, Nabeel Mohammed
27 Jan 2022

Enabling Deep Learning on Edge Devices through Filter Pruning and Knowledge Transfer
Kaiqi Zhao, Yitao Chen, Ming Zhao
22 Jan 2022

It's All in the Head: Representation Knowledge Distillation through Classifier Sharing
Emanuel Ben-Baruch, M. Karklinsky, Yossi Biton, Avi Ben-Cohen, Hussam Lawen, Nadav Zamir
18 Jan 2022

STURE: Spatial-Temporal Mutual Representation Learning for Robust Data Association in Online Multi-Object Tracking
Haidong Wang, Zhiyong Li, Yaping Li, Ke Nai, Ming Wen
VOT
18 Jan 2022

Cross-modal Contrastive Distillation for Instructional Activity Anticipation
Zhengyuan Yang, Jingen Liu, Jing-ling Huang, Xiaodong He, Tao Mei, Chenliang Xu, Jiebo Luo
18 Jan 2022

SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation
K. Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
13 Jan 2022

A Multi-channel Training Method Boost the Performance
Yingdong Hu
27 Dec 2021