FitNets: Hints for Thin Deep Nets

19 December 2014
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio · FedML
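For readers skimming the citation list below, a brief reminder of what these papers build on: FitNets trains a thin, deep student by matching an intermediate "guided" layer of the student, passed through a learned regressor, to a "hint" layer of a wider teacher with an L2 loss, before standard knowledge distillation on the outputs. The snippet below is a minimal PyTorch sketch of that hint objective; the 1x1-conv regressor, channel sizes, and names are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HintLoss(nn.Module):
    """Minimal sketch of the FitNets hint objective (stage 1).

    The thin student's intermediate ("guided") features are mapped through a
    learned regressor so their width matches the teacher's intermediate
    ("hint") features, then compared with an L2 loss. The 1x1-conv regressor
    and the channel sizes below are illustrative assumptions.
    """

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        # Regressor adapts the student's narrower feature maps to the
        # teacher's channel width.
        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        # L2 between the teacher's hint features and the regressed student
        # features; the teacher is frozen, so its features are detached.
        return F.mse_loss(self.regressor(student_feat), teacher_feat.detach())

# Toy usage: batch of 8, student features 32ch, teacher features 128ch.
hint = HintLoss(student_channels=32, teacher_channels=128)
loss = hint(torch.randn(8, 32, 16, 16), torch.randn(8, 128, 16, 16))
```

In the paper, this hint loss pretrains the student up to the guided layer; a second stage then distills the whole network with the usual softened-logit knowledge distillation loss.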

Papers citing "FitNets: Hints for Thin Deep Nets"

50 / 667 papers shown
Distilling Aggregated Knowledge for Weakly-Supervised Video Anomaly Detection
  Jash Dalvi, Ali Dabouei, Gunjan Dhanuka, Min Xu · 05 Jun 2024
Robust Preference Optimization through Reward Model Distillation
  Adam Fisch, Jacob Eisenstein, Vicky Zayats, Alekh Agarwal, Ahmad Beirami, Chirag Nagpal, Peter Shaw, Jonathan Berant · 29 May 2024
Relational Self-supervised Distillation with Compact Descriptors for Image Copy Detection
  Juntae Kim, Sungwon Woo, Jongho Nang · 28 May 2024
OmniBind: Teach to Build Unequal-Scale Modality Interaction for Omni-Bind of All
  Yuanhuiyi Lyu, Xueye Zheng, Dahun Kim, Lin Wang · 25 May 2024
Retro: Reusing teacher projection head for efficient embedding distillation on Lightweight Models via Self-supervised Learning
  Khanh-Binh Nguyen, Chae Jung Park · 24 May 2024
AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting
  Shreyan Ganguly, Roshan Nayak, Rakshith Rao, Ujan Deb, AP Prathosh · 11 May 2024
CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
  Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu · VLM · 22 Apr 2024
LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models
  Dingkun Zhang, Sijia Li, Chen Chen, Qingsong Xie, H. Lu · 17 Apr 2024
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
  Sean Farhat, Deming Chen · 04 Apr 2024
Learning to Project for Cross-Task Knowledge Distillation
  Dylan Auty, Roy Miles, Benedikt Kolbeinsson, K. Mikolajczyk · 21 Mar 2024
LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving
  Sicen Guo, Zhiyuan Wu, Qijun Chen, Ioannis Pitas, Rui Fan · 13 Mar 2024
Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples
  Eda Yilmaz, H. Keles · AAML · 08 Mar 2024
Attention-guided Feature Distillation for Semantic Segmentation
  Amir M. Mansourian, Arya Jalali, Rozhan Ahmadi, S. Kasaei · 08 Mar 2024
RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features
  Geonho Bang, Kwangjin Choi, Jisong Kim, Dongsuk Kum, Jun Won Choi · 08 Mar 2024
Teaching MLP More Graph Information: A Three-stage Multitask Knowledge Distillation Framework
  Junxian Li, Bin Shi, Erfei Cui, Hua Wei, Qinghua Zheng · 02 Mar 2024
Knowledge Distillation Based on Transformed Teacher Matching
  Kaixiang Zheng, En-Hui Yang · 17 Feb 2024
Efficient Multi-task Uncertainties for Joint Semantic Segmentation and Monocular Depth Estimation
  S. Landgraf, Markus Hillemann, Theodor Kapler, Markus Ulrich · UQCV · 16 Feb 2024
Embedding Compression for Teacher-to-Student Knowledge Transfer
  Yiwei Ding, Alexander Lerch · 09 Feb 2024
Continual Learning on Graphs: A Survey
  Zonggui Tian, Duanhao Zhang, Hong-Ning Dai · 09 Feb 2024
Data-efficient Large Vision Models through Sequential Autoregression
  Jianyuan Guo, Zhiwei Hao, Chengcheng Wang, Yehui Tang, Han Wu, Han Hu, Kai Han, Chang Xu · VLM · 07 Feb 2024
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF
  Banghua Zhu, Michael I. Jordan, Jiantao Jiao · 29 Jan 2024
Distilling Privileged Multimodal Information for Expression Recognition using Optimal Transport
  Haseeb Aslam, Muhammad Osama Zeeshan, Soufiane Belharbi, M. Pedersoli, A. L. Koerich, Simon L Bacon, Eric Granger · 27 Jan 2024
Mutual Distillation Learning For Person Re-Identification
  Huiyuan Fu, Kuilong Cui, Chuanming Wang, Mengshi Qi, Huadong Ma · 12 Jan 2024
Knowledge Translation: A New Pathway for Model Compression
  Wujie Sun, Defang Chen, Jiawei Chen, Yan Feng, Chun-Yen Chen, Can Wang · 11 Jan 2024
Revisiting Knowledge Distillation under Distribution Shift
  Songming Zhang, Ziyu Lyu, Xiaofeng Chen · 25 Dec 2023
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation
  Chengming Hu, Haolun Wu, Xuan Li, Chen Ma, Xi Chen, Jun Yan, Boyu Wang, Xue Liu · 22 Dec 2023
TinySAM: Pushing the Envelope for Efficient Segment Anything Model
  Han Shu, Wenshuo Li, Yehui Tang, Yiman Zhang, Yihao Chen, Houqiang Li, Yunhe Wang, Xinghao Chen · VLM · 21 Dec 2023
Expediting Contrastive Language-Image Pretraining via Self-distilled Encoders
  Bumsoo Kim, Jinhyung Kim, Yeonsik Jo, S. Kim · VLM · 19 Dec 2023
Decoupled Knowledge with Ensemble Learning for Online Distillation
  Baitan Shao, Ying Chen · 18 Dec 2023
AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One
  Michael Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov · VLM · 10 Dec 2023
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
  Yunyang Xiong, Bala Varadarajan, Lemeng Wu, Xiaoyu Xiang, Fanyi Xiao, ..., Dilin Wang, Fei Sun, Forrest N. Iandola, Raghuraman Krishnamoorthi, Vikas Chandra · VLM · 01 Dec 2023
Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization
  Jixuan Leng, Yijiang Li, Haohan Wang · VLM · 26 Nov 2023
Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning
  Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim · 23 Nov 2023
Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction
  Mohamed El Amine Elforaici, E. Montagnon, Francisco Perdigon Romero, W. Le, F. Azzi, Dominique Trudel, Bich Nguyen, Simon Turcotte, An Tang, Samuel Kadoury · MedIm · 17 Nov 2023
A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations
  Hui Ma, Jian Wang, Hongfei Lin, Bo Zhang, Yijia Zhang, Bo Xu · 31 Oct 2023
MUST: A Multilingual Student-Teacher Learning approach for low-resource speech recognition
  Muhammad Umar Farooq, Rehan Ahmad, Thomas Hain · 29 Oct 2023
Understanding the Effects of Projectors in Knowledge Distillation
  Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang · 26 Oct 2023
Gramian Attention Heads are Strong yet Efficient Vision Learners
  Jongbin Ryu, Dongyoon Han, J. Lim · 25 Oct 2023
MAC: ModAlity Calibration for Object Detection
  Yutian Lei, Jun Liu, Dong Huang · ObjD · 14 Oct 2023
Dual-Stream Knowledge-Preserving Hashing for Unsupervised Video Retrieval
  P. Li, Hongtao Xie, Jiannan Ge, Lei Zhang, Shaobo Min, Yongdong Zhang · 12 Oct 2023
Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis
  Peipei Li, Xing Cui, Yibo Hu, Man Zhang, Ting Yao, Tao Mei · 08 Oct 2023
Heterogeneous Federated Learning Using Knowledge Codistillation
  Jared Lichtarge, Ehsan Amid, Shankar Kumar, Tien-Ju Yang, Rohan Anil, Rajiv Mathews · FedML · 04 Oct 2023
VideoAdviser: Video Knowledge Distillation for Multimodal Transfer Learning
  Yanan Wang, Donghuo Zeng, Shinya Wada, Satoshi Kurihara · 27 Sep 2023
DistillBEV: Boosting Multi-Camera 3D Object Detection with Cross-Modal Knowledge Distillation
  Zeyu Wang, Dingwen Li, Chenxu Luo, Cihang Xie, Xiaodong Yang · 26 Sep 2023
Weight Averaging Improves Knowledge Distillation under Domain Shift
  Valeriy Berezovskiy, Nikita Morozov · MoMe · 20 Sep 2023
Towards Comparable Knowledge Distillation in Semantic Image Segmentation
  Onno Niemann, Christopher Vox, Thorben Werner · VLM · 07 Sep 2023
Improving Small Footprint Few-shot Keyword Spotting with Supervision on Auxiliary Data
  Seunghan Yang, Byeonggeun Kim, Kyuhong Shim, Simyoung Chang · 31 Aug 2023
Machine Unlearning Methodology base on Stochastic Teacher Network
  Xulong Zhang, Jianzong Wang, Ning Cheng, Yifu Sun, Chuanyao Zhang, Jing Xiao · MU · 28 Aug 2023
Rethinking Client Drift in Federated Learning: A Logit Perspective
  Yu-bao Yan, Chun-Mei Feng, Wangmeng Zuo, Mang Ye, Rick Siow Mong Goh, Ping Li, Lei Zhu, C. L. Philip Chen · FedML · 20 Aug 2023
Multi-Label Knowledge Distillation
  Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang · 12 Aug 2023