Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori
23 July 2019 (arXiv:1907.09682)
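The paper's core idea is to make the student preserve the teacher's pairwise activation similarities across a mini-batch, rather than match the activations themselves. Below is a minimal PyTorch sketch of that similarity-preserving loss, assuming flattened activation maps from the same batch; the function name and shapes are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def similarity_preserving_loss(teacher_feat: torch.Tensor,
                               student_feat: torch.Tensor) -> torch.Tensor:
    """Sketch of the similarity-preserving KD loss (Tung & Mori, 2019).

    Both inputs hold activations for the same mini-batch, shaped
    (b, c, h, w) or (b, d); only the batch dimension must agree between
    teacher and student. teacher_feat is assumed to be detached
    (e.g. computed under torch.no_grad()).
    """
    b = teacher_feat.size(0)
    # Flatten each sample's activations into a row vector: (b, c*h*w).
    t = teacher_feat.reshape(b, -1)
    s = student_feat.reshape(b, -1)
    # Batch-wise pairwise similarity matrices, row-normalized: (b, b).
    g_t = F.normalize(t @ t.t(), p=2, dim=1)
    g_s = F.normalize(s @ s.t(), p=2, dim=1)
    # Squared Frobenius difference, scaled by 1/b^2 as in the paper.
    return (g_s - g_t).pow(2).sum() / (b * b)
```

In the paper this term is summed over selected teacher/student layer pairs and added to the usual cross-entropy loss with a weighting factor gamma.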

Papers citing "Similarity-Preserving Knowledge Distillation"

22 of 122 citing papers shown:

Distilling Object Detectors via Decoupled Features (26 Mar 2021)
Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu

Compacting Deep Neural Networks for Internet of Things: Methods and Applications (20 Mar 2021)
Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuanyuan Peng, Keyi Guo, Hongfang Yu

Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup (17 Dec 2020)
Guodong Xu, Ziwei Liu, Chen Change Loy
Topics: UQCV

Learnable Boundary Guided Adversarial Training (23 Nov 2020)
Jiequan Cui, Shu-Lin Liu, Liwei Wang, Jiaya Jia
Topics: OOD, AAML

Densely Guided Knowledge Distillation using Multiple Teacher Assistants (18 Sep 2020)
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang

Robust Re-Identification by Multiple Views Knowledge Distillation (08 Jul 2020)
Angelo Porrello, Luca Bergamini, Simone Calderara

Multi-fidelity Neural Architecture Search with Knowledge Distillation (15 Jun 2020)
I. Trofimov, Nikita Klyuchnikov, Mikhail Salnikov, Alexander N. Filippov, Evgeny Burnaev

Knowledge Distillation Meets Self-Supervision (12 Jun 2020)
Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy
Topics: FedML

Knowledge Distillation: A Survey (09 Jun 2020)
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
Topics: VLM

ResKD: Residual-Guided Knowledge Distillation (08 Jun 2020)
Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li

An Overview of Neural Network Compression (05 Jun 2020)
James O'Neill
Topics: AI4CE

Inter-Region Affinity Distillation for Road Marking Segmentation (11 Apr 2020)
Yuenan Hou, Zheng Ma, Chunxiao Liu, Tak-Wai Hui, Chen Change Loy

COVID-MobileXpert: On-Device COVID-19 Patient Triage and Follow-up using Chest X-rays (06 Apr 2020)
Xin Li, Chengyin Li, D. Zhu

Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model (31 Mar 2020)
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong

UR2KiD: Unifying Retrieval, Keypoint Detection, and Keypoint Description without Local Correspondence Supervision (20 Jan 2020)
Tsun-Yi Yang, Duy-Kien Nguyen, Huub Heijnen, Vassileios Balntas
Topics: 3DV

MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy? (16 Jan 2020)
Joseph Bethge, Christian Bartz, Haojin Yang, Ying Chen, Christoph Meinel
Topics: MQ

16 Jan 2020
Preparing Lessons: Improve Knowledge Distillation with Better Supervision (18 Nov 2019)
Tiancheng Wen, Shenqi Lai, Xueming Qian

Knowledge Distillation for Incremental Learning in Semantic Segmentation (08 Nov 2019)
Umberto Michieli, Pietro Zanuttigh
Topics: CLL, VLM

Contrastive Representation Distillation (23 Oct 2019)
Yonglong Tian, Dilip Krishnan, Phillip Isola

Extreme Low Resolution Activity Recognition with Confident Spatial-Temporal Attention Transfer (09 Sep 2019)
Yucai Bai, Qinglong Zou, Xieyuanli Chen, Lingxi Li, Zhengming Ding, Long Chen

Knowledge Distillation by On-the-Fly Native Ensemble (12 Jun 2018)
Xu Lan, Xiatian Zhu, S. Gong

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications (09 Apr 2018)
Tien-Ju Yang, Andrew G. Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam