arXiv: 1811.03233
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
AAAI Conference on Artificial Intelligence (AAAI), 2019
8 November 2018
Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi
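
For context on the method behind the title: the paper trains a student network to reproduce the teacher's activation boundaries, i.e. which hidden neurons fire, rather than the neurons' exact response magnitudes. Below is a minimal PyTorch sketch of a margin-based activation-boundary loss in that spirit; the function name, the fixed margin, and the assumption that a connector layer has already matched student and teacher channel widths are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def activation_boundary_loss(student_pre, teacher_pre, margin=1.0):
    """Hinge-style activation-boundary transfer (illustrative sketch).

    student_pre / teacher_pre: pre-ReLU activations of matching shape
    (a 1x1-conv connector is assumed to have aligned channel counts).
    The student is pushed to the teacher's side of each neuron's
    activation boundary by at least `margin`.
    """
    # 1 where the teacher neuron fires (pre-activation > 0), else 0.
    teacher_on = (teacher_pre > 0).float()
    # Teacher neuron on  -> penalize student pre-activations below +margin.
    # Teacher neuron off -> penalize student pre-activations above -margin.
    loss = (teacher_on * F.relu(margin - student_pre).pow(2)
            + (1.0 - teacher_on) * F.relu(margin + student_pre).pow(2))
    return loss.mean()
```
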
Papers citing "Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons" (50 of 264 shown)

ABM-LoRA: Activation Boundary Matching for Fast Convergence in Low-Rank Adaptation
Dongha Lee, Jinhee Park, Minjun Kim, Junseok Kwon [AI4CE]
24 Nov 2025

Do Students Debias Like Teachers? On the Distillability of Bias Mitigation Methods
Jiali Cheng, Chirag Agarwal, Hadi Amiri
30 Oct 2025

UHKD: A Unified Framework for Heterogeneous Knowledge Distillation via Frequency-Domain Representations
Fengming Yu, Haiwei Pan, Kejia Zhang, Jian Guan, Haiying Jiang
28 Oct 2025

Parameter-Free Logit Distillation via Sorting Mechanism
IEEE Signal Processing Letters (IEEE SPL), 2025
Stephen Ekaputra Limantoro
22 Aug 2025

Expandable Residual Approximation for Knowledge Distillation
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2025
Zhaoyi Yan, Binghui Chen, Yunfan Liu, Qixiang Ye [CLL]
22 Aug 2025

TopKD: Top-scaled Knowledge Distillation
Qi Wang, Jinjia Zhou
06 Aug 2025

Beyond Gloss: A Hand-Centric Framework for Gloss-Free Sign Language Translation
Sobhan Asasi, Mohamed Ilyas Lakhal, Ozge Mercanoglu Sincan, Richard Bowden [SLR]
31 Jul 2025

ASC-SW: Atrous strip convolution network with sliding windows
Cheng Liu, Fan Zhu, Yifeng Xu, Baoru Huang, Mohd Rizal Arshad
17 Jul 2025

Frequency-Aligned Knowledge Distillation for Lightweight Spatiotemporal Forecasting
Yuqi Li, Chuanguang Yang, Hansheng Zeng, Zeyu Dong, Zhulin An, Yongjun Xu, Yingli Tian, Hao Wu [AI4TS]
27 Jun 2025

A Layered Self-Supervised Knowledge Distillation Framework for Efficient Multimodal Learning on the Edge
Tarique Dahri, Zulfiqar Ali Memon, Zhenyu Yu, Mohd Yamani Idna Idris, Sheheryar Khan, Sadiq Ahmad, Maged Shoman, Saddam Aziz, Rizwan Qureshi
08 Jun 2025

Bidirectional Knowledge Distillation for Enhancing Sequential Recommendation with Large Language Models
Jiongran Wu, Jiahao Liu, Dongsheng Li, Guangping Zhang, Mingzhe Han, Hansu Gu, Peng Zhang, Li Shang, Tun Lu, Ning Gu
23 May 2025

Swapped Logit Distillation via Bi-level Teacher Alignment
Stephen Ekaputra Limantoro, Jhe-Hao Lin, Chih-Yu Wang, Yi-Lung Tsai, Hong-Han Shuai, Ching-Chun Huang, Wen-Huang Cheng
27 Apr 2025

An Efficient Training Algorithm for Models with Block-wise Sparsity
Ding Zhu, Zhiqun Zuo, Mohammad Mahdi Khalili
27 Mar 2025

MIDAS: Modeling Ground-Truth Distributions with Dark Knowledge for Domain Generalized Stereo Matching
Peng Xu, Zhiyu Xiang, Jingyun Fu, Tianyu Pu, Hanzhi Zhong, Eryun Liu [OOD]
06 Mar 2025

VRM: Knowledge Distillation via Virtual Relation Matching
W. Zhang, Fei Xie, Weidong Cai, Chao Ma
28 Feb 2025

Multi-Level Decoupled Relational Distillation for Heterogeneous Architectures
Yaoxin Yang, Peng Ye, Weihao Lin, Kangcong Li, Yan Wen, Jia Hao, Tao Chen
10 Feb 2025

Dynamic Frequency-Adaptive Knowledge Distillation for Speech Enhancement
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025
Xihao Yuan, Siqi Liu, Hanting Chen, Lu Zhou, Jian Li, Jie Hu
07 Feb 2025

Variational Bayesian Adaptive Learning of Deep Latent Variables for Acoustic Knowledge Transfer
IEEE Transactions on Audio, Speech, and Language Processing (TASLP), 2025
Hu Hu, Sabato Marco Siniscalchi, Chao-Han Huck Yang, Chin-Hui Lee
28 Jan 2025

Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu, Songze Li, Lin Wang
13 Jan 2025

On Distilling the Displacement Knowledge for Few-Shot Class-Incremental Learning
Pengfei Fang, Yongchun Qin, H. Xue [CLL]
15 Dec 2024

Multi-Surrogate-Teacher Assistance for Representation Alignment in Fingerprint-based Indoor Localization
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2024
Son Minh Nguyen, Linh Duy Tran, Duc Viet Le, Paul J. M Havinga
13 Dec 2024

Rethinking the Intermediate Features in Adversarial Attacks: Misleading Robotic Models via Adversarial Distillation
Ke Zhao, Huayang Huang, Miao Li, Yu Wu [AAML]
21 Nov 2024

Map-Free Trajectory Prediction with Map Distillation and Hierarchical Encoding
Xiaodong Liu, Yucheng Xing, Xin Wang
17 Nov 2024

GazeGen: Gaze-Driven User Interaction for Visual Content Generation
He-Yen Hsieh, Ziyun Li, Sai Qian Zhang, W. Ting, Kao-Den Chang, B. D. Salvo, Chiao Liu, H. T. Kung [VGen]
07 Nov 2024

Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning
Neural Information Processing Systems (NeurIPS), 2024
Mingxing Li, Jinjie Wei, Yongxu Liu, Shunli Wang, Jiawei Chen, ..., Xiaolu Hou, Mingyang Sun, Ziyun Qian, Dongliang Kou, Li Zhang
05 Nov 2024

Decoupling Dark Knowledge via Block-wise Logit Distillation for Feature-level Alignment
IEEE Transactions on Artificial Intelligence (IEEE TAI), 2024
Chengting Yu, Fengzhao Zhang, Ruizhe Chen, Zuozhu Liu, Shurun Tan, Er-ping Li, Aili Wang
03 Nov 2024

SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung
25 Oct 2024

Preview-based Category Contrastive Learning for Knowledge Distillation
Muhe Ding, Yue Yu, Xue Dong, Xiaojie Li, Pengda Qin, Tian Gan, Liqiang Nie [VLM]
18 Oct 2024

Distilling Invariant Representations with Dual Augmentation
Nikolaos Giakoumoglou, Tania Stathaki
12 Oct 2024

SNN-PAR: Energy Efficient Pedestrian Attribute Recognition via Spiking Neural Networks
Haiyang Wang, Qian Zhu, Mowen She, Yabo Li, Haoyu Song, Minghe Xu, Xiao Wang [ViT]
10 Oct 2024

EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models
Neural Information Processing Systems (NeurIPS), 2024
Rui Zhao, Hangjie Yuan, Yujie Wei, Shiwei Zhang, Yuchao Gu, ..., Xiang Wang, Zhangjie Wu, Junhao Zhang, Yingya Zhang, Mike Zheng Shou [DiffM, VLM]
09 Oct 2024

Convex Distillation: Efficient Compression of Deep Networks via Convex Optimization
Prateek Varshney, Mert Pilanci
09 Oct 2024

Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher
International Conference on Learning Representations (ICLR), 2024
Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen
05 Oct 2024

Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks
Edan Kinderman, Itay Hubara, Haggai Maron, Daniel Soudry [MoMe]
02 Oct 2024

Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal
30 Sep 2024

Harmonizing knowledge Transfer in Neural Network with Unified Distillation
European Conference on Computer Vision (ECCV), 2024
Yaomin Huang, Zaomin Yan, Yaxin Peng, Faming Fang, Guixu Zhang
27 Sep 2024

Towards Model-Agnostic Dataset Condensation by Heterogeneous Models
European Conference on Computer Vision (ECCV), 2024
Jun-Yeong Moon, Jung Uk Kim, Gyeong-Moon Park [DD]
22 Sep 2024

Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation
Kangkai Zhang, Shiming Ge, Ruixin Shi, Dan Zeng
04 Sep 2024

Collaborative Learning for Enhanced Unsupervised Domain Adaptation
Minhee Cho, Hyesong Choi, Hayeon Jo, Dongbo Min
04 Sep 2024

UDD: Dataset Distillation via Mining Underutilized Regions
Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 2024
Shiguang Wang, Zhongyu Zhang, Jian Cheng [DD]
29 Aug 2024

Knowledge Distillation with Refined Logits
Wujie Sun, Defang Chen, Siwei Lyu, Genlang Chen, Chun-Yen Chen, Can Wang
14 Aug 2024

UNIC: Universal Classification Models via Multi-teacher Distillation
European Conference on Computer Vision (ECCV), 2024
Mert Bulent Sariyildiz, Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis
09 Aug 2024

DFMSD: Dual Feature Masking Stage-wise Knowledge Distillation for Object Detection
Zhourui Zhang, Jun Li, Zhijian Wu, Jifeng Shen, Jianhua Xu
18 Jul 2024

Relational Representation Distillation
Nikolaos Giakoumoglou, Tania Stathaki
16 Jul 2024

A Survey on Symbolic Knowledge Distillation of Large Language Models
Kamal Acharya, Alvaro Velasquez, Haoze Song [SyDa]
12 Jul 2024

CrowdTransfer: Enabling Crowd Knowledge Transfer in AIoT Community
Yan Liu, Bin Guo, Nuo Li, Yasan Ding, Zhouyangzi Zhang, Zhiwen Yu
09 Jul 2024

Improving Knowledge Distillation in Transfer Learning with Layer-wise Learning Rates
Shirley Kokane, M. R. Uddin, Min Xu
05 Jul 2024

AMD: Automatic Multi-step Distillation of Large-scale Vision Models
Cheng Han, Qifan Wang, S. Dianat, Majid Rabbani, Raghuveer M. Rao, Yi Fang, Qiang Guan, Lifu Huang, Dongfang Liu [VLM]
05 Jul 2024

PC-LoRA: Low-Rank Adaptation for Progressive Model Compression with Knowledge Distillation
Injoon Hwang, Haewon Park, Youngwan Lee, Jooyoung Yang, SunJae Maeng [AI4CE]
13 Jun 2024

DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas
12 Jun 2024