FitNets: Hints for Thin Deep Nets

19 December 2014 · FedML
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio

Papers citing "FitNets: Hints for Thin Deep Nets"

50 / 667 papers shown
Foreground Object Search by Distilling Composite Image Feature
Bo Zhang, Jiacheng Sui, Li Niu
09 Aug 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
08 Aug 2023

Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models
Seungcheol Park, Ho-Jin Choi, U. Kang
07 Aug 2023 · VLM

Cross-dimensional transfer learning in medical image segmentation with deep learning
Hicham Messaoudi, Ahror Belaid, Douraied Ben Salem, Pierre-Henri Conze
29 Jul 2023 · MedIm

Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data
Qing Xu, Min-man Wu, Xiaoli Li, K. Mao, Zhenghua Chen
07 Jul 2023

Review helps learn better: Temporal Supervised Knowledge Distillation
Dongwei Wang, Zhi Han, Yanmei Wang, Xi’ai Chen, Baichen Liu, Yandong Tang
03 Jul 2023

Q-YOLO: Efficient Inference for Real-time Object Detection
Mingze Wang, H. Sun, Jun Shi, Xuhui Liu, Baochang Zhang, Xianbin Cao
01 Jul 2023 · ObjD

Reducing the gap between streaming and non-streaming Transducer-based ASR by adaptive two-stage knowledge distillation
Haitao Tang, Yu Fu, Lei Sun, Jiabin Xue, Dan Liu, ..., Zhiqiang Ma, Minghui Wu, Jia Pan, Genshun Wan, Ming’En Zhao
27 Jun 2023

Cross Architecture Distillation for Face Recognition
Weisong Zhao, Xiangyu Zhu, Zhixiang He, Xiaoyu Zhang, Zhen Lei
26 Jun 2023 · CVBM

CrossKD: Cross-Head Knowledge Distillation for Object Detection
Jiabao Wang, Yuming Chen, Zhaohui Zheng, Xiang Li, Ming-Ming Cheng, Qibin Hou
20 Jun 2023

Depth and DOF Cues Make A Better Defocus Blur Detector
Yuxin Jin, Ming Qian, Jincheng Xiong, Nan Xue, Guisong Xia
20 Jun 2023

LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation
Yixiao Li, Yifan Yu, Qingru Zhang, Chen Liang, Pengcheng He, Weizhu Chen, Tuo Zhao
20 Jun 2023

Learning to Learn from APIs: Black-Box Data-Free Meta-Learning
Zixuan Hu, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, Dacheng Tao
28 May 2023

Improving Knowledge Distillation via Regularizing Feature Norm and Direction
Yuzhu Wang, Lechao Cheng, Manni Duan, Yongheng Wang, Zunlei Feng, Shu Kong
26 May 2023

Knowledge Diffusion for Distillation
Tao Huang, Yuan Zhang, Mingkai Zheng, Shan You, Fei Wang, Chao Qian, Chang Xu
25 May 2023

Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui, Zhuotao Tian, Zhisheng Zhong, Xiaojuan Qi, Bei Yu, Hanwang Zhang
23 May 2023

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li, Yuxuan Li, Penghai Zhao, Renjie Song, Xiang Li, Jian Yang
22 May 2023

Student-friendly Knowledge Distillation
Mengyang Yuan, Bo Lang, Fengnan Quan
18 May 2023

Analyzing Compression Techniques for Computer Vision
Maniratnam Mandal, Imran Khan
14 May 2023

CORSD: Class-Oriented Relational Self Distillation
Muzhou Yu, S. Tan, Kailu Wu, Runpei Dong, Linfeng Zhang, Kaisheng Ma
28 Apr 2023

Pre-trained Embeddings for Entity Resolution: An Experimental Analysis [Experiment, Analysis & Benchmark]
Alexandros Zeakis, G. Papadakis, Dimitrios Skoutas, Manolis Koubarakis
24 Apr 2023

Function-Consistent Feature Distillation
Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen
24 Apr 2023

Knowledge Distillation Under Ideal Joint Classifier Assumption
Huayu Li, Xiwen Chen, G. Ditzler, Janet Roveda, Ao Li
19 Apr 2023

Constructing Deep Spiking Neural Networks from Artificial Neural Networks with Knowledge Distillation
Qi Xu, Yaxin Li, Jiangrong Shen, Jian K. Liu, Huajin Tang, Gang Pan
12 Apr 2023

Grouped Knowledge Distillation for Deep Face Recognition
Weisong Zhao, Xiangyu Zhu, Kaiwen Guo, Xiaoyu Zhang, Zhen Lei
10 Apr 2023 · CVBM

Geometric-aware Pretraining for Vision-centric 3D Object Detection
Linyan Huang, Huijie Wang, J. Zeng, Shengchuan Zhang, Liujuan Cao, Junchi Yan, Hongyang Li
06 Apr 2023 · 3DPC

Self-Distillation for Gaussian Process Regression and Classification
Kenneth Borup, L. Andersen
05 Apr 2023

Q-DETR: An Efficient Low-Bit Quantized Detection Transformer
Sheng Xu, Yanjing Li, Mingbao Lin, Penglei Gao, Guodong Guo, Jinhu Lu, Baochang Zhang
01 Apr 2023 · MQ

DIME-FM: DIstilling Multimodal and Efficient Foundation Models
Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko, Xide Xia
31 Mar 2023 · VLM

CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
Ge Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, Guohao Li
31 Mar 2023 · SyDa · ALM

Decomposed Cross-modal Distillation for RGB-based Temporal Action Detection
Pilhyeon Lee, Taeoh Kim, Minho Shim, Dongyoon Wee, H. Byun
30 Mar 2023

Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
Y. Cheng, Yichao Yan, Wenhan Zhu, Ye Pan, Bowen Pan, Xiaokang Yang
28 Mar 2023 · 3DH

UniDistill: A Universal Cross-Modality Knowledge Distillation Framework for 3D Object Detection in Bird's-Eye View
Shengchao Zhou, Weizhou Liu, Chen Hu, Shuchang Zhou, Chaoxiang Ma
27 Mar 2023

CAT: Collaborative Adversarial Training
Xingbin Liu, Huafeng Kuang, Xianming Lin, Yongjian Wu, Rongrong Ji
27 Mar 2023 · AAML

Decoupled Multimodal Distilling for Emotion Recognition
Yong Li, Yuan-Zheng Wang, Zhen Cui
24 Mar 2023

Exploiting Unlabelled Photos for Stronger Fine-Grained SBIR
Aneeshan Sain, A. Bhunia, Subhadeep Koley, Pinaki Nath Chowdhury, Soumitri Chattopadhyay, Tao Xiang, Yi-Zhe Song
24 Mar 2023

From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels
Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, Yu Li
23 Mar 2023

MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation
Vitaliy Kinakh, M. Drozdova, Slava Voloshynovskiy
21 Mar 2023

Performance-aware Approximation of Global Channel Pruning for Multitask CNNs
Hancheng Ye, Bo-Wen Zhang, Tao Chen, Jiayuan Fan, Bin Wang
21 Mar 2023

Knowledge Distillation from Single to Multi Labels: an Empirical Study
Youcai Zhang, Yuzhuo Qin, Heng-Ye Liu, Yanhao Zhang, Yaqian Li, X. Gu
15 Mar 2023 · VLM

SCPNet: Semantic Scene Completion on Point Cloud
Zhaoyang Xia, You-Chen Liu, Xin Li, Xinge Zhu, Yuexin Ma, Yikang Li, Yuenan Hou, Yu Qiao
13 Mar 2023

DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu, Enzo Tartaglione
02 Mar 2023

Distillation from Heterogeneous Models for Top-K Recommendation
SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu
02 Mar 2023 · VLM

Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation
Rehan Ahmad, Md. Asif Jalal, Muhammad Umar Farooq, A. Ollerenshaw, Thomas Hain
01 Mar 2023

Generic-to-Specific Distillation of Masked Autoencoders
Wei Huang, Zhiliang Peng, Li Dong, Furu Wei, Jianbin Jiao, Qixiang Ye
28 Feb 2023

Analyzing Populations of Neural Networks via Dynamical Model Embedding
Jordan S. Cotler, Kai Sheng Tai, Felipe Hernández, Blake Elias, David Sussillo
27 Feb 2023

Graph-based Knowledge Distillation: A survey and experimental evaluation
Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao
27 Feb 2023

OccDepth: A Depth-Aware Method for 3D Semantic Scene Completion
Ruihang Miao, Weizhou Liu, Ming-lei Chen, Zheng Gong, Weixin Xu, Chen Hu, Shuchang Zhou
27 Feb 2023

LightTS: Lightweight Time Series Classification with Adaptive Ensemble Distillation -- Extended Version
David Campos, Miao Zhang, B. Yang, Tung Kieu, Chenjuan Guo, Christian S. Jensen
24 Feb 2023 · AI4TS

Distilling Calibrated Student from an Uncalibrated Teacher
Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra
22 Feb 2023 · FedML