Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi
AAAI Conference on Artificial Intelligence (AAAI), 2018
8 November 2018
arXiv: 1811.03233 (latest version: v2)

Papers citing "Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons"

Showing 50 of 264 citing papers.

BPKD: Boundary Privileged Knowledge Distillation For Semantic Segmentation
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2023
Liyang Liu, Zihan Wang, M. Phan, Bowen Zhang, Jinchao Ge, Yifan Liu
13 Jun 2023

Are Large Kernels Better Teachers than Transformers for ConvNets?
International Conference on Machine Learning (ICML), 2023
Tianjin Huang, Lu Yin, Zhenyu Zhang, Lijuan Shen, Meng Fang, Mykola Pechenizkiy, Zinan Lin, Shiwei Liu
30 May 2023

Decoupled Kullback-Leibler Divergence Loss
Neural Information Processing Systems (NeurIPS), 2023
Jiequan Cui, Zhuotao Tian, Zhisheng Zhong, Xiaojuan Qi, Bei Yu, Hanwang Zhang
23 May 2023

NORM: Knowledge Distillation via N-to-One Representation Matching
International Conference on Learning Representations (ICLR), 2023
Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao
23 May 2023

EnSiam: Self-Supervised Learning With Ensemble Representations
Kai Han, Minsik Lee
22 May 2023

Revisiting Data Augmentation in Model Compression: An Empirical and Comprehensive Study
IEEE International Joint Conference on Neural Networks (IJCNN), 2023
Muzhou Yu, Linfeng Zhang, Kaisheng Ma
22 May 2023

Student-friendly Knowledge Distillation
Knowledge-Based Systems (KBS), 2023
Mengyang Yuan, Bo Lang, Fengnan Quan
18 May 2023

DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing
Pattern Recognition, 2023
Songling Zhu, Ronghua Shang, Bo Yuan, Weitong Zhang, Yangyang Li, Licheng Jiao
09 May 2023

Leveraging Synthetic Targets for Machine Translation
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Sarthak Mittal, Oleksii Hrinchuk, Oleksii Kuchaiev
07 May 2023

Stimulative Training++: Go Beyond The Performance Limits of Residual Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
XinYu Piao, Tong He, DoangJoo Synn, Baopu Li, Tao Chen, Mengwei He, Jong-Kook Kim
04 May 2023

Improving Knowledge Distillation via Transferring Learning Ability
Long Liu, Tong Li, Hui Cheng
24 Apr 2023

LiDAR2Map: In Defense of LiDAR-Based Semantic Map Construction Using Online Camera Distillation
Computer Vision and Pattern Recognition (CVPR), 2023
Song Wang, Wentong Li, Wenyu Liu, Xiaolu Liu, Jianke Zhu
22 Apr 2023

eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation
Libo Huang, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, Yongjun Xu
20 Apr 2023

Label-guided Attention Distillation for Lane Segmentation
Neurocomputing, 2023
Zhikang Liu, Lanyun Zhu
04 Apr 2023

Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation
Computer Vision and Pattern Recognition (CVPR), 2023
Yang Jin, Mengke Li, Yang Lu, Yiu-ming Cheung, Hanzi Wang
03 Apr 2023

CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
Neural Information Processing Systems (NeurIPS), 2023
Ge Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, Guohao Li
31 Mar 2023

DAMO-StreamNet: Optimizing Streaming Perception in Autonomous Driving
International Joint Conference on Artificial Intelligence (IJCAI), 2023
Ju He, Zhi-Qi Cheng, Chenyang Li, Wangmeng Xiang, Binghui Chen, Bin Luo, Yifeng Geng, Xuansong Xie
30 Mar 2023

DisWOT: Student Architecture Search for Distillation WithOut Training
Computer Vision and Pattern Recognition (CVPR), 2023
Peijie Dong, Lujun Li, Zimian Wei
28 Mar 2023

MetaMixer: A Regularization Strategy for Online Knowledge Distillation
Maorong Wang, L. Xiao, T. Yamasaki
14 Mar 2023

A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
Kaiqi Zhao, Yitao Chen, Ming Zhao
14 Mar 2023

Learn More for Food Recognition via Progressive Self-Distillation
AAAI Conference on Artificial Intelligence (AAAI), 2023
Yaohui Zhu, Linhu Liu, Jiang Tian
09 Mar 2023

Audio Representation Learning by Distilling Video as Privileged Information
IEEE Transactions on Artificial Intelligence (IEEE TAI), 2023
Amirhossein Hajavi, Ali Etemad
06 Feb 2023

Knowledge Distillation on Graphs: A Survey
ACM Computing Surveys, 2023
Yijun Tian, Shichao Pei, Xiangliang Zhang, Chuxu Zhang, Nitesh Chawla
01 Feb 2023

StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection
AAAI Conference on Artificial Intelligence (AAAI), 2023
Yanfeng Guo, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai
04 Jan 2023

TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models
Computer Vision and Pattern Recognition (CVPR), 2023
Sucheng Ren, Fangyun Wei, Zheng Zhang, Han Hu
03 Jan 2023

Gait Recognition Using 3-D Human Body Shape Inference
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Haidong Zhu, Zhao-Heng Zheng, Ramkant Nevatia
18 Dec 2022

Enhancing Low-Density EEG-Based Brain-Computer Interfaces with Similarity-Keeping Knowledge Distillation
Xin Huang, Sung-Yu Chen, Chun-Shu Wei
06 Dec 2022

Adaptive Attention Link-based Regularization for Vision Transformers
Advanced Video and Signal Based Surveillance (AVSS), 2022
Heegon Jin, Jongwon Choi
25 Nov 2022

An Interpretable Neuron Embedding for Static Knowledge Distillation
Wei Han, Yang Wang, Christian Böhm, Junming Shao
14 Nov 2022

Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Hongjun Choi, Eunyeong Jeon, Ankita Shukla, Pavan Turaga
08 Nov 2022

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
28 Oct 2022

Multimodal Transformer Distillation for Audio-Visual Synchronization
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022
Xuan-Bo Chen, Haibin Wu, Chung-Che Wang, Hung-yi Lee, J. Jang
27 Oct 2022

Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Cuong Pham, Tuan Hoang, Thanh-Toan Do
27 Oct 2022

Respecting Transfer Gap in Knowledge Distillation
Neural Information Processing Systems (NeurIPS), 2022
Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang
23 Oct 2022

Few-Shot Learning of Compact Models via Task-Specific Meta Distillation
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang
18 Oct 2022

Distilling Object Detectors With Global Knowledge
European Conference on Computer Vision (ECCV), 2022
Sanli Tang, Zhongyu Zhang, Zhanzhan Cheng, Jing Lu, Yunlu Xu, Yi Niu, Fan He
17 Oct 2022

Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
Neural Information Processing Systems (NeurIPS), 2022
Xin-Chun Li, Wenxuan Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan
10 Oct 2022

Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing
Neural Information Processing Systems (NeurIPS), 2022
Peng Ye, Shengji Tang, Baopu Li, Tao Chen, Wanli Ouyang
09 Oct 2022

CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation
International Conference on Pattern Recognition (ICPR), 2022
Ibtihel Amara, M. Ziaeefard, B. Meyer, W. Gross, J. Clark
15 Sep 2022

Continual Learning for Pose-Agnostic Object Recognition in 3D Point Clouds
Xihao Wang, Xian Wei
11 Sep 2022

Generative Adversarial Super-Resolution at the Edge with Knowledge Distillation
Engineering Applications of Artificial Intelligence (EAAI), 2022
Simone Angarano, Francesco Salvetti, Mauro Martini, Marcello Chiaberge
07 Sep 2022

Masked Autoencoders Enable Efficient Knowledge Distillers
Computer Vision and Pattern Recognition (CVPR), 2022
Yutong Bai, Zeyu Wang, Junfei Xiao, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie
25 Aug 2022

Task-Balanced Distillation for Object Detection
Pattern Recognition, 2022
Ruining Tang, Zhen-yu Liu, Yangguang Li, Yiguo Song, Hui Liu, Qide Wang, Jing Shao, Guifang Duan, Jianrong Tan
05 Aug 2022

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang
23 Jul 2022

Locality Guidance for Improving Vision Transformers on Tiny Datasets
European Conference on Computer Vision (ECCV), 2022
Kehan Li, Runyi Yu, Zhennan Wang, Li-ming Yuan, Guoli Song, Jie Chen
20 Jul 2022

HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors
European Conference on Computer Vision (ECCV), 2022
Luting Wang, Xiaojie Li, Yue Liao, Jiang, Yue Yu, Haiwei Yang, Chao Qian, Si Liu
12 Jul 2022

Cross-Architecture Knowledge Distillation
Asian Conference on Computer Vision (ACCV), 2022
Yufan Liu, Jiajiong Cao, Bing Li, Weiming Hu, Jin-Fei Ding, Liang Li
12 Jul 2022

PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient
Neural Information Processing Systems (NeurIPS), 2022
Weihan Cao, Yifan Zhang, Jianfei Gao, Anda Cheng, Ke Cheng, Jian Cheng
05 Jul 2022

Revisiting Architecture-aware Knowledge Distillation: Smaller Models and Faster Search
Taehyeon Kim, Heesoo Myeong, Se-Young Yun
27 Jun 2022

Multi scale Feature Extraction and Fusion for Online Knowledge Distillation
International Conference on Artificial Neural Networks (ICANN), 2022
Panpan Zou, Yinglei Teng, Tao Niu
16 Jun 2022

Page 3 of 6