Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons

AAAI Conference on Artificial Intelligence (AAAI), 2019
8 November 2018
Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi
ArXiv (abs) · PDF · HTML

Papers citing "Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons"

50 / 264 papers shown

Toward Student-Oriented Teacher Network Training For Knowledge Distillation
International Conference on Learning Representations (ICLR), 2022
Chengyu Dong, Liyuan Liu, Jingbo Shang
14 Jun 2022

ORC: Network Group-based Knowledge Distillation using Online Role Change
IEEE International Conference on Computer Vision (ICCV), 2022
Jun-woo Choi, Hyeon Cho, Seockhwa Jeong, Wonjun Hwang
01 Jun 2022

Towards Efficient 3D Object Detection with Knowledge Distillation
Neural Information Processing Systems (NeurIPS), 2022
Jihan Yang, Shaoshuai Shi, Runyu Ding, Zhe Wang, Xiaojuan Qi
30 May 2022

Parameter-Efficient and Student-Friendly Knowledge Distillation
IEEE Transactions on Multimedia (IEEE TMM), 2022
Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao
28 May 2022

Fast Object Placement Assessment
Li Niu, Qingyang Liu, Zhenchen Liu, Jiangtong Li
28 May 2022

Knowledge Distillation from A Stronger Teacher
Neural Information Processing Systems (NeurIPS), 2022
Tao Huang, Shan You, Fei Wang, Chao Qian, Chang Xu
21 May 2022

Knowledge Distillation Meets Open-Set Semi-Supervised Learning
International Journal of Computer Vision (IJCV), 2022
Jing Yang, Xiatian Zhu, Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos
13 May 2022

Spot-adaptive Knowledge Distillation
IEEE Transactions on Image Processing (IEEE TIP), 2022
Mingli Song, Pengcheng Chen, Jingwen Ye, Weilong Dai
05 May 2022
Attention-based Knowledge Distillation in Multi-attention Tasks: The Impact of a DCT-driven Loss
Alejandro López-Cifuentes, Marcos Escudero-Viñolo, Jesús Bescós, Juan C. Sanmiguel
04 May 2022

DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Computer Vision and Pattern Recognition (CVPR), 2022
Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao
27 Apr 2022

Proto2Proto: Can you recognize the car, the way I do?
Computer Vision and Pattern Recognition (CVPR), 2022
Monish Keswani, Sriranjani Ramakrishnan, Nishant Reddy, V. Balasubramanian
25 Apr 2022

R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis
European Conference on Computer Vision (ECCV), 2022
Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, Sergey Tulyakov
31 Mar 2022

Knowledge Distillation with the Reused Teacher Classifier
Computer Vision and Pattern Recognition (CVPR), 2022
Defang Chen, Jianhan Mei, Hailin Zhang, C. Wang, Yan Feng, Chun-Yen Chen
26 Mar 2022

A Closer Look at Knowledge Distillation with Features, Logits, and Gradients
Yen-Chang Hsu, James Smith, Yilin Shen, Z. Kira, Hongxia Jin
18 Mar 2022

Learning Affordance Grounding from Exocentric Images
Computer Vision and Pattern Recognition (CVPR), 2022
Hongcheng Luo, Wei Zhai, Jing Zhang, Yang Cao, Dacheng Tao
18 Mar 2022
Decoupled Knowledge Distillation
Computer Vision and Pattern Recognition (CVPR), 2022
Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, Jiajun Liang
16 Mar 2022

Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability
Computer Vision and Pattern Recognition (CVPR), 2022
Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi
10 Mar 2022

Bridging the Gap Between Patient-specific and Patient-independent Seizure Prediction via Knowledge Distillation
Journal of Neural Engineering (J. Neural Eng.), 2022
Di Wu, Jie Yang, Mohamad Sawan
25 Feb 2022

Nonlinear Initialization Methods for Low-Rank Neural Networks
Kiran Vodrahalli, Rakesh Shivanna, M. Sathiamoorthy, Sagar Jain, Ed H. Chi
02 Feb 2022

Adaptive Instance Distillation for Object Detection in Autonomous Driving
International Conference on Pattern Recognition (ICPR), 2022
Qizhen Lan, Qing Tian
26 Jan 2022

It's All in the Head: Representation Knowledge Distillation through Classifier Sharing
Emanuel Ben-Baruch, M. Karklinsky, Yossi Biton, Avi Ben-Cohen, Hussam Lawen, Nadav Zamir
18 Jan 2022

Egeria: Efficient DNN Training with Knowledge-Guided Layer Freezing
European Conference on Computer Systems (EuroSys), 2022
Yiding Wang, D. Sun, Kai Chen, Fan Lai, Mosharaf Chowdhury
17 Jan 2022

SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation
K. Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
13 Jan 2022
Pixel Distillation: A New Knowledge Distillation Scheme for Low-Resolution Image Recognition
Guangyu Guo, Dingwen Zhang, Longfei Han, Nian Liu, Ming-Ming Cheng, Junwei Han
17 Dec 2021

Information Theoretic Representation Distillation
Roy Miles, Adrian Lopez-Rodriguez, K. Mikolajczyk
01 Dec 2021

Local-Selective Feature Distillation for Single Image Super-Resolution
Seonguk Park, Nojun Kwak
22 Nov 2021

Learning Interpretation with Explainable Knowledge Distillation
Raed Alharbi, Minh Nhat Vu, My T. Thai
12 Nov 2021

Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2021
Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, ..., Kun Kuang, Chao-Xiang Wu, Leilei Gan, Jingren Zhou, Hongxia Yang
11 Nov 2021

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
Neural Information Processing Systems (NeurIPS), 2021
Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li
09 Nov 2021

A Survey on Green Deep Learning
Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li
08 Nov 2021

Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models
J. Yoon, H. Kim, Hyeon Seung Lee, Sunghwan Ahn, N. Kim
05 Nov 2021
Distilling Object Detectors with Feature Richness
Neural Information Processing Systems (NeurIPS), 2021
Zhixing Du, Rui Zhang, Ming-Fang Chang, Xishan Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
01 Nov 2021

A Variational Bayesian Approach to Learning Latent Variables for Acoustic Knowledge Transfer
Hu Hu, Sabato Marco Siniscalchi, Chao-Han Huck Yang, Chin-Hui Lee
16 Oct 2021

NEWRON: A New Generalization of the Artificial Neuron to Enhance the Interpretability of Neural Networks
F. Siciliano, Maria Sofia Bucarelli, Gabriele Tolomei, Fabrizio Silvestri
05 Oct 2021

Prune Your Model Before Distill It
Jinhyuk Park, Albert No
30 Sep 2021

Towards Communication-Efficient and Privacy-Preserving Federated Representation Learning
Haizhou Shi, Youcai Zhang, Zijin Shen, Siliang Tang, Yaqian Li, Yandong Guo, Yueting Zhuang
29 Sep 2021

Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
07 Sep 2021

Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision
Yue Liu, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li
30 Aug 2021

Lipschitz Continuity Guided Knowledge Distillation
IEEE International Conference on Computer Vision (ICCV), 2021
Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan
29 Aug 2021

Multi-granularity for knowledge distillation
Baitan Shao, Ying Chen
15 Aug 2021
Learning from Matured Dumb Teacher for Fine Generalization
Heeseung Jung, Kangil Kim, Hoyong Kim, Jong-Hun Shin
12 Aug 2021

Semi-Supervised Domain Generalizable Person Re-Identification
Lingxiao He, Wu Liu, Jian Liang, Kecheng Zheng, Xingyu Liao, Peng Cheng, Tao Mei
11 Aug 2021

Hierarchical Self-supervised Augmented Knowledge Distillation
International Joint Conference on Artificial Intelligence (IJCAI), 2021
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
29 Jul 2021

Double Similarity Distillation for Semantic Image Segmentation
IEEE Transactions on Image Processing (TIP), 2021
Yingchao Feng, Xian Sun, Wenhui Diao, Jihao Li, Xin Gao
19 Jul 2021

WeClick: Weakly-Supervised Video Semantic Segmentation with Click Annotations
Peidong Liu, Zibin He, Xiyu Yan, Yong Jiang, Shutao Xia, Feng Zheng, Maowei Hu
07 Jul 2021

Confidence Conditioned Knowledge Distillation
Sourav Mishra, Suresh Sundaram
06 Jul 2021

Revisiting Knowledge Distillation: An Inheritance and Exploration Framework
Computer Vision and Pattern Recognition (CVPR), 2021
Zhen Huang, Xu Shen, Jun Xing, Tongliang Liu, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xiansheng Hua
01 Jul 2021

Does Knowledge Distillation Really Work?
Neural Information Processing Systems (NeurIPS), 2021
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, A. Wilson
10 Jun 2021

BERT Learns to Teach: Knowledge Distillation with Meta Learning
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Wangchunshu Zhou, Canwen Xu, Julian McAuley
08 Jun 2021

Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model
International Conference on Machine Learning (ICML), 2021
Zehao Wang
07 Jun 2021