Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons

AAAI Conference on Artificial Intelligence (AAAI), 2019
8 November 2018 (arXiv:1811.03233)
Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi
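For context, the headline paper transfers the activation boundaries of a teacher's hidden neurons rather than their exact response magnitudes: the student is trained so that each of its (connector-mapped) pre-activations agrees in sign with the teacher's, via a margin-based hinge loss. Below is a minimal PyTorch-style sketch of that per-layer loss under stated assumptions; the function and tensor names are illustrative, and the default margin of 1 follows the paper's setting.

```python
import torch
import torch.nn.functional as F

def activation_boundary_loss(t_pre: torch.Tensor,
                             s_pre: torch.Tensor,
                             margin: float = 1.0) -> torch.Tensor:
    """Margin-based activation-boundary transfer loss (per layer).

    t_pre: teacher pre-activations (responses *before* ReLU),
           treated as fixed targets.
    s_pre: student pre-activations, already mapped to the teacher's
           shape by a connector (e.g. a 1x1 conv), as in the paper.
    """
    # Teacher's activation boundary: 1 where a neuron fires, 0 otherwise.
    rho = (t_pre.detach() > 0).float()

    # Hinge-style penalty: push the student's response above +margin
    # where the teacher is active, and below -margin where it is not,
    # so both networks separate the input space the same way.
    per_neuron = (rho * F.relu(margin - s_pre).pow(2)
                  + (1.0 - rho) * F.relu(margin + s_pre).pow(2))
    return per_neuron.mean()
```

In the paper, this transfer term is applied layer-wise in an initialization stage, after which the student is trained on the target task with the ordinary classification loss.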

Papers citing "Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons"

Showing 14 of 264 citing papers:
The State of Knowledge Distillation for Classification
Fabian Ruffy, K. Chahal
20 Dec 2019

QUEST: Quantized embedding space for transferring knowledge
European Conference on Computer Vision (ECCV), 2020
Himalaya Jain, Spyros Gidaris, N. Komodakis, P. Pérez, Matthieu Cord
03 Dec 2019

Preparing Lessons: Improve Knowledge Distillation with Better Supervision
Tiancheng Wen, Shenqi Lai, Xueming Qian
18 Nov 2019

Collaborative Distillation for Top-N Recommendation
Industrial Conference on Data Mining (IDM), 2019
Jae-woong Lee, Minjin Choi, Jongwuk Lee, Hyunjung Shim
13 Nov 2019

Contrastive Representation Distillation
International Conference on Learning Representations (ICLR), 2020
Yonglong Tian, Dilip Krishnan, Phillip Isola
23 Oct 2019

VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition
Mengjia Yan, Mengao Zhao, Zining Xu, Qian Zhang, Guoli Wang, Zhizhong Su
11 Oct 2019 · CVBM

Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
European Conference on Artificial Intelligence (ECAI), 2020
Umar Asif, Jianbin Tang, S. Harrer
17 Sep 2019 · FedML

Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement
Zhiqiang Shen, Zhankui He, Wanyun Cui, Jiahui Yu, Yutong Zheng, Chenchen Zhu, Marios Savvides
22 Aug 2019 · AAML

Graph-based Knowledge Distillation by Multi-head Attention Network
British Machine Vision Conference (BMVC), 2019
Seunghyun Lee, B. Song
04 Jul 2019

ShrinkTeaNet: Million-scale Lightweight Face Recognition via Shrinking Teacher-Student Networks
C. Duong, Khoa Luu, Kha Gia Quach, Ngan Le
25 May 2019 · CVBM

Full-Gradient Representation for Neural Network Visualization
Neural Information Processing Systems (NeurIPS), 2019
Suraj Srinivas, François Fleuret
02 May 2019 · MILM, FAtt

Feature Fusion for Online Mutual Knowledge Distillation
Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak
19 Apr 2019 · FedML

A Comprehensive Overhaul of Feature Distillation
Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, J. Choi
03 Apr 2019

ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
Shuxuan Guo, J. Álvarez, Mathieu Salzmann
26 Nov 2018