© 2025 ResearchTrend.AI, All rights reserved.

FitNets: Hints for Thin Deep Nets
19 December 2014
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio

Papers citing "FitNets: Hints for Thin Deep Nets"

50 / 667 papers shown
FiGKD: Fine-Grained Knowledge Distillation via High-Frequency Detail Transfer
Seonghak Kim · 17 May 2025

Tracr-Injection: Distilling Algorithms into Pre-trained Language Models
Tomás Vergara-Browne, Álvaro Soto · 15 May 2025

DCSNet: A Lightweight Knowledge Distillation-Based Model with Explainable AI for Lung Cancer Diagnosis from Histopathological Images
Sadman Sakib Alif, Nasim Anzum Promise, Fiaz Al Abid, Aniqua Nusrat Zereen · 14 May 2025

KDH-MLTC: Knowledge Distillation for Healthcare Multi-Label Text Classification
Hajar Sakai, Sarah Lam · VLM · 12 May 2025

ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $α$-$β$-Divergence
Guanghui Wang, Zhiyong Yang, Zihan Wang, Shi Wang, Qianqian Xu, Qingming Huang · 07 May 2025

How to Train Your Metamorphic Deep Neural Network
Thomas Sommariva, Simone Calderara, Angelo Porrello · 07 May 2025

Robust Understanding of Human-Robot Social Interactions through Multimodal Distillation
Tongfei Bian, Mathieu Chollet, T. Guha · 06 May 2025

Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques
Sanjay Surendranath Girija, Shashank Kapoor, Lakshit Arora, Dipen Pradhan, Aman Raj, Ankit Shetgaonkar · 05 May 2025

Uncertainty-Aware Multi-Expert Knowledge Distillation for Imbalanced Disease Grading
Shuo Tong, Shangde Gao, Ke Liu, Zihang Huang, Hongxia Xu, Haochao Ying, Jian Wu · 01 May 2025

Mitigating Catastrophic Forgetting in the Incremental Learning of Medical Images
Sara Yavari, Jacob Furst · CLL · 28 Apr 2025

Swapped Logit Distillation via Bi-level Teacher Alignment
Stephen Ekaputra Limantoro, Jhe-Hao Lin, Chih-Yu Wang, Yi-Lung Tsai, Hong-Han Shuai, Ching-Chun Huang, Wen-Huang Cheng · 27 Apr 2025

Learning Critically: Selective Self Distillation in Federated Learning on Non-IID Data
Yuting He, Yiqiang Chen, Xiaodong Yang, H. Yu, Yi-Hua Huang, Yang Gu · FedML · 20 Apr 2025

Cross-Modal and Uncertainty-Aware Agglomeration for Open-Vocabulary 3D Scene Understanding
Jinlong Li, Cristiano Saltori, Fabio Poiesi, N. Sebe · 20 Mar 2025

Moss: Proxy Model-based Full-Weight Aggregation in Federated Learning with Heterogeneous Models
Y. Cai, Ziqi Zhang, Ding Li, Yao Guo, Xiangqun Chen · 13 Mar 2025

Semantic-Supervised Spatial-Temporal Fusion for LiDAR-based 3D Object Detection
Chaoqun Wang, Xiaobin Hong, Wenzhong Li, Ruimao Zhang · 3DPC · 13 Mar 2025

SplatPose: Geometry-Aware 6-DoF Pose Estimation from Single RGB Image via 3D Gaussian Splatting
Linqi Yang, Xiongwei Zhao, Qihao Sun, Ke Wang, Ao Chen, Peng Kang · 3DGS · 07 Mar 2025

VRM: Knowledge Distillation via Virtual Relation Matching
W. Zhang, Fei Xie, Weidong Cai, Chao Ma · 28 Feb 2025

I2CKD: Intra- and Inter-Class Knowledge Distillation for Semantic Segmentation
Ayoub Karine, Thibault Napoléon, M. Jridi · VLM · 24 Feb 2025

Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization
Or Raphael Bidusa, Shaul Markovitch · 20 Feb 2025

Leave No One Behind: Enhancing Diversity While Maintaining Accuracy in Social Recommendation
Lei Li, Xiao Zhou · 17 Feb 2025

Compressing Model with Few Class-Imbalance Samples: An Out-of-Distribution Expedition
Tian-Shuang Wu, Shen-Huan Lyu, Ning Chen, Zhihao Qu, Baoliu Ye · OODD · 09 Feb 2025

Contrastive Representation Distillation via Multi-Scale Feature Decoupling
Cuipeng Wang, Tieyuan Chen, Haipeng Wang · 09 Feb 2025

Variational Bayesian Adaptive Learning of Deep Latent Variables for Acoustic Knowledge Transfer
Hu Hu, Sabato Marco Siniscalchi, Chao-Han Huck Yang, Chin-Hui Lee · 28 Jan 2025

QCS: Feature Refining from Quadruplet Cross Similarity for Facial Expression Recognition
Cong Wang, Li Chen, Lili Wang, Zhaofan Li, Xuebin Lv · 28 Jan 2025

Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu, Songze Li, Lin Wang · 13 Jan 2025

Knowledge Distillation with Adapted Weight
Sirong Wu, Xi Luo, Junjie Liu, Yuhui Deng · 06 Jan 2025

Exploring Feature-based Knowledge Distillation for Recommender System: A Frequency Perspective
Zhangchi Zhu, Wei Zhang · 16 Nov 2024

Quantifying Knowledge Distillation Using Partial Information Decomposition
Pasan Dissanayake, Faisal Hamman, Barproda Halder, Ilia Sucholutsky, Qiuyi Zhang, Sanghamitra Dutta · 12 Nov 2024

GazeGen: Gaze-Driven User Interaction for Visual Content Generation
He-Yen Hsieh, Ziyun Li, Sai Qian Zhang, W. Ting, Kao-Den Chang, B. D. Salvo, Chiao Liu, H. T. Kung · VGen · 07 Nov 2024

Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning
Yuxiang Lu, Shengcao Cao, Yu-xiong Wang · 18 Oct 2024

TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration
Yiwei Guo, Shaobin Zhuang, Kunchang Li, Yu Qiao, Yali Wang · VLM, CLIP · 16 Oct 2024

HASN: Hybrid Attention Separable Network for Efficient Image Super-resolution
Weifeng Cao, Xiaoyan Lei, Jun Shi, Wanyong Liang, Jie Liu, Zongfei Bai · SupR · 13 Oct 2024

Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
S. Joshi, Jiayi Ni, Baharan Mirzasoleiman · DD · 03 Oct 2024

PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation
Mike Ranzinger, Jon Barker, Greg Heinrich, Pavlo Molchanov, Bryan Catanzaro, Andrew Tao · 02 Oct 2024

Linear Projections of Teacher Embeddings for Few-Class Distillation
Noel Loo, Fotis Iliopoulos, Wei Hu, Erik Vee · 30 Sep 2024

Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal · 30 Sep 2024

Harmonizing Knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang · 27 Sep 2024

Efficient Low-Resolution Face Recognition via Bridge Distillation
Shiming Ge, Shengwei Zhao, Chenyu Li, Yu Zhang, Jia Li · CVBM · 18 Sep 2024

Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning
Amin Karimi Monsefi, Mengxi Zhou, Nastaran Karimi Monsefi, Ser-Nam Lim, Wei-Lun Chao, R. Ramnath · 16 Sep 2024

DiReDi: Distillation and Reverse Distillation for AIoT Applications
Chen Sun, Qing Tong, Wenshuang Yang, Wenqi Zhang · 12 Sep 2024

Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition
Shiming Ge, Kangkai Zhang, Haolin Liu, Yingying Hua, Shengwei Zhao, Xin Jin, Hao Wen · 09 Sep 2024

Unleashing the Power of Generic Segmentation Models: A Simple Baseline for Infrared Small Target Detection
Mingjin Zhang, Chi Zhang, Qiming Zhang, Yunsong Li, Xinbo Gao, Jing Zhang · VLM · 07 Sep 2024

Learning Privacy-Preserving Student Networks via Discriminative-Generative Distillation
Shiming Ge, Bochao Liu, Pengju Wang, Yong Li, Dan Zeng · FedML · 04 Sep 2024

Continual Distillation Learning: Knowledge Distillation in Prompt-based Continual Learning
Qifan Zhang, Yunhui Guo, Yu Xiang · CLL, VLM · 18 Jul 2024

Relational Representation Distillation
Nikolaos Giakoumoglou, Tania Stathaki · 16 Jul 2024

Distilling System 2 into System 1
Ping Yu, Jing Xu, Jason Weston, Ilia Kulikov · OffRL, LRM · 08 Jul 2024

Cross-Architecture Auxiliary Feature Space Translation for Efficient Few-Shot Personalized Object Detection
F. Barbato, Umberto Michieli, J. Moon, Pietro Zanuttigh, Mete Ozay · 01 Jul 2024

Federated Graph Semantic and Structural Learning
Wenke Huang, Guancheng Wan, Mang Ye, Bo Du · FedML · 27 Jun 2024

DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas · 12 Jun 2024

ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs
Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang · 06 Jun 2024