Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
AAAI Conference on Artificial Intelligence (AAAI), 2019
arXiv:1811.03233, 8 November 2018
Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi
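For context on the cited paper: it transfers the teacher's activation boundaries (which hidden neurons fire, rather than their exact pre-activation values) to the student. Below is a minimal PyTorch sketch of the hinge-style transfer loss described in the paper, assuming pre-ReLU feature tensors from teacher and student (the student's already passed through a channel-matching connector) and a margin hyperparameter mu; the function and variable names here are illustrative, not the authors' released code.

```python
# Hedged sketch of the activation-boundary (AB) transfer loss.
# Assumes t, s are pre-ReLU feature maps of equal shape; mu is the margin.
import torch

def ab_transfer_loss(t: torch.Tensor, s: torch.Tensor, mu: float = 1.0) -> torch.Tensor:
    """Penalize the student wherever its activation boundary disagrees
    with the teacher's, with margin mu (per-element mean used here;
    the paper writes a squared L2 norm over all elements)."""
    rho = (t > 0).float()                    # binarized teacher activations
    on_term = rho * torch.relu(mu - s)       # teacher ON, student below +mu: push up
    off_term = (1.0 - rho) * torch.relu(mu + s)  # teacher OFF, student above -mu: push down
    return (on_term + off_term).pow(2).mean()

# Toy usage with random pre-ReLU features.
t = torch.randn(8, 64, 4, 4)  # teacher features
s = torch.randn(8, 64, 4, 4)  # student features after connector
print(ab_transfer_loss(t, s).item())
```

Because the on/off terms have disjoint support (rho vs. 1 - rho), squaring their sum is equivalent to summing their squares, matching the paper's combined-norm formulation.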
Papers citing "Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons" (showing 50 of 264):
Transferring Knowledge from Large Foundation Models to Small Downstream Models
Shikai Qiu, Boran Han, Danielle C. Maddix, Shuai Zhang, Yuyang Wang, Andrew Gordon Wilson. 11 Jun 2024.

ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs
Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang. 06 Jun 2024.

Distilling Aggregated Knowledge for Weakly-Supervised Video Anomaly Detection
Jash Dalvi, Ali Dabouei, Gunjan Dhanuka, Min Xu. 05 Jun 2024.

Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model
Jinyin Chen, Xiaoming Zhao, Haibin Zheng, Xiao Li, Sheng Xiang, Haifeng Guo. 01 Jun 2024. [AAML]

Label-efficient Semantic Scene Completion with Scribble Annotations
Song Wang, Jiawei Yu, Wentong Li, Hao Shi, Kailun Yang, Junbo Chen, Jianke Zhu. 24 May 2024.

Exploring Dark Knowledge under Various Teacher Capacities and Addressing Capacity Mismatch
Wen-Shu Fan, Xin-Chun Li, Bowen Tao. 21 May 2024.

Cross-Domain Knowledge Distillation for Low-Resolution Human Pose Estimation
Zejun Gu, Zhongming Zhao, Henghui Ding, Hao Shen, Zhao Zhang, De-Shuang Huang. 19 May 2024.

Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing (CVPR 2024)
Yunlong Zhao, Xiaoheng Deng, Yijing Liu, Xin-jun Pei, Jiazhi Xia, Wei Chen. 18 May 2024. [AAML]

Exploring Graph-based Knowledge: Multi-Level Feature Distillation via Channels Relational Graph
Zhiwei Wang, Jun Huang, Longhua Ma, Chengyu Wu, Hongyu Ma. 14 May 2024.

CNN2GNN: How to Bridge CNN with GNN
Ziheng Jiao, Hongyuan Zhang, Xuelong Li. 23 Apr 2024.

Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning
Zhengyang Liang, Meiyu Liang, Wei Huang, Yawen Li, Zhe Xue. 16 Apr 2024.

On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
Sean Farhat, Deming Chen. 04 Apr 2024.

Task Integration Distillation for Object Detectors
Hai Su, ZhenWen Jian, Songsen Yu. 02 Apr 2024.

Scheduled Knowledge Acquisition on Lightweight Vector Symbolic Architectures for Brain-Computer Interfaces
Yejia Liu, Shijin Duan, Xiaolin Xu, Shaolei Ren. 18 Mar 2024.

Self-Supervised Quantization-Aware Knowledge Distillation
Kaiqi Zhao, Ming Zhao. 17 Mar 2024. [MQ]

Histo-Genomic Knowledge Distillation For Cancer Prognosis From Histopathology Whole Slide Images
Zhikang Wang, Yumeng Zhang, Yingxue Xu, S. Imoto, Hao Chen, Jiangning Song. 15 Mar 2024.

LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving
Sicen Guo, Zhiyuan Wu, Qijun Chen, Ioannis Pitas, Rui Fan. 13 Mar 2024.

Distilling the Knowledge in Data Pruning
Emanuel Ben-Baruch, Adam Botach, Igor Kviatkovsky, Manoj Aggarwal, Gérard Medioni. 12 Mar 2024.

V_kD: Improving Knowledge Distillation using Orthogonal Projections (CVPR 2024)
Roy Miles, Ismail Elezi, Jiankang Deng. 10 Mar 2024.

RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features (CVPR 2024)
Geonho Bang, Kwangjin Choi, Jisong Kim, Dongsuk Kum, Jun Won Choi. 08 Mar 2024.

Sinkhorn Distance Minimization for Knowledge Distillation
Xiao Cui, Yulei Qin, Yuting Gao, Enwei Zhang, Zihan Xu, Tong Wu, Ke Li, Xing Sun, Wen-gang Zhou, Houqiang Li. 27 Feb 2024.

TIE-KD: Teacher-Independent and Explainable Knowledge Distillation for Monocular Depth Estimation
Sangwon Choi, Daejune Choi, Duksu Kim. 22 Feb 2024.

Can LLMs Compute with Reasons?
Harshit Sandilya, Peehu Raj, J. Bafna, Srija Mukhopadhyay, Shivansh Sharma, Ellwil Sharma, Arastu Sharma, Neeta Trivedi, Manish Shrivastava, Rajesh Kumar. 19 Feb 2024. [LRM]

GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation
Ayan Banerjee, Sanket Biswas, Josep Lladós, Umapada Pal. 17 Feb 2024.

Cooperative Knowledge Distillation: A Learner Agnostic Approach
Michael J. Livanos, Ian Davidson, Stephen Wong. 02 Feb 2024.

Progressive Multi-task Anti-Noise Learning and Distilling Frameworks for Fine-grained Vehicle Recognition
Dichao Liu. 25 Jan 2024.

Bayesian adaptive learning to latent variables via Variational Bayes and Maximum a Posteriori
Hu Hu, Sabato Marco Siniscalchi, Chin-Hui Lee. 24 Jan 2024. [BDL]

Federated Continual Learning via Knowledge Fusion: A Survey
Xin Yang, Hao Yu, Xin Gao, Hao Wang, Junbo Zhang, Tianrui Li. 27 Dec 2023. [FedML]

ShiftKD: Benchmarking Knowledge Distillation under Distribution Shift
Songming Zhang, Ziyu Lyu, Xiaofeng Chen. 25 Dec 2023.

Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation
Chengming Hu, Haolun Wu, Xuan Li, Chen Ma, Xi Chen, Jun Yan, Boyu Wang, Xue Liu. 22 Dec 2023.

StableKD: Breaking Inter-block Optimization Entanglement for Stable Knowledge Distillation
Shiu-hong Kao, Jierun Chen, S.-H. Gary Chan. 20 Dec 2023.

RdimKD: Generic Distillation Paradigm by Dimensionality Reduction
Yi Guo, Yiqian He, Xiaoyang Li, Haotong Qin, Van Tung Pham, Yang Zhang, Shouda Liu. 14 Dec 2023.

LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS (NeurIPS 2023)
Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, Zinan Lin. 28 Nov 2023. [3DGS]

Maximizing Discrimination Capability of Knowledge Distillation with Energy Function (Knowledge-Based Systems, 2023)
Seonghak Kim, Gyeongdo Ham, Suin Lee, Donggon Jang, Daeshik Kim. 24 Nov 2023.

Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models (NeurIPS 2023)
Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang. 02 Nov 2023. [VLM]

torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP
Yoshitomo Matsubara. 26 Oct 2023. [VLM]

Exploiting User Comments for Early Detection of Fake News Prior to Users' Commenting
Qiong Nan, Qiang Sheng, Juan Cao, Yongchun Zhu, Danding Wang, Guang Yang, Jintao Li, Kai Shu. 16 Oct 2023.

Learning Unified Representations for Multi-Resolution Face Recognition
Hulingxiao He, Wu Yuan, Yidian Huang, Shilong Zhao, Wen Yuan, Hanqin Li. 14 Oct 2023. [CVBM]

LumiNet: Perception-Driven Knowledge Distillation via Statistical Logit Calibration
M. Hossain, M. M. L. Elahi, Sameera Ramasinghe, A. Cheraghian, Fuad Rahman, Nabeel Mohammed, Shafin Rahman. 05 Oct 2023.

Improving Knowledge Distillation with Teacher's Explanation
S. Chowdhury, Ben Liang, A. Tizghadam, Ilijc Albanese. 04 Oct 2023. [FAtt]

Heterogeneous Generative Knowledge Distillation with Masked Image Modeling
Ziming Wang, Shumin Han, Xiaodi Wang, Jing Hao, Xianbin Cao, Baochang Zhang. 18 Sep 2023. [VLM]

Knowledge Distillation Layer that Lets the Student Decide (BMVC 2023)
Ada Gorgun, Y. Z. Gürbüz, A. Aydin Alatan. 06 Sep 2023.

Fine-tuning can cripple your foundation model; preserving features may be the solution
Jishnu Mukhoti, Y. Gal, Juil Sock, P. Dokania. 25 Aug 2023. [CLL]

Representation Disparity-aware Distillation for 3D Object Detection (ICCV 2023)
Yanjing Li, Sheng Xu, Mingbao Lin, Jihao Yin, Baochang Zhang, Xianbin Cao. 20 Aug 2023.

SRMAE: Masked Image Modeling for Scale-Invariant Deep Representations (CPRCV 2023)
Zhiming Wang, Lin Gu, Feng Lu. 17 Aug 2023.

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu. 08 Aug 2023.

DOT: A Distillation-Oriented Trainer (ICCV 2023)
Borui Zhao, Quan Cui, Renjie Song, Jiajun Liang. 17 Jul 2023.

Frameless Graph Knowledge Distillation (TNNLS 2023)
Dai Shi, Zhiqi Shao, Yi Guo, Junbin Gao. 13 Jul 2023.

The Staged Knowledge Distillation in Video Classification: Harmonizing Student Progress by a Complementary Weakly Supervised Framework
Chao Wang, Zhenghang Tang. 11 Jul 2023.

Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu. 19 Jun 2023. [VLM, OffRL]