Knowledge Distillation Meets Self-Supervision

12 June 2020
Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy
FedML
arXiv: 2006.07114

Papers citing "Knowledge Distillation Meets Self-Supervision"

39 / 39 papers shown

OmniBind: Teach to Build Unequal-Scale Modality Interaction for Omni-Bind of All
Yuanhuiyi Lyu, Xueye Zheng, Dahun Kim, Lin Wang
25 May 2024

Retro: Reusing teacher projection head for efficient embedding distillation on Lightweight Models via Self-supervised Learning
Khanh-Binh Nguyen, Chae Jung Park
24 May 2024

CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu
VLM
22 Apr 2024

Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang
26 Oct 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
08 Aug 2023

Revisiting Token Dropping Strategy in Efficient BERT Pretraining
Qihuang Zhong, Liang Ding, Juhua Liu, Xuebo Liu, Min Zhang, Bo Du, Dacheng Tao
VLM
24 May 2023

Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
Y. Cheng, Yichao Yan, Wenhan Zhu, Ye Pan, Bowen Pan, Xiaokang Yang
3DH
28 Mar 2023

Distilling Calibrated Student from an Uncalibrated Teacher
Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra
FedML
22 Feb 2023

Rethinking Soft Label in Label Distribution Learning Perspective
Seungbum Hong, Jihun Yoon, Bogyu Park, Min-Kook Choi
31 Jan 2023

Hilbert Distillation for Cross-Dimensionality Networks
Dian Qin, Haishuai Wang, Zhe Liu, Hongjia Xu, Sheng Zhou, Jiajun Bu
08 Nov 2022

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
28 Oct 2022

Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision
Jiongyu Guo, Defang Chen, Can Wang
25 Oct 2022

Respecting Transfer Gap in Knowledge Distillation
Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang
23 Oct 2022

GCISG: Guided Causal Invariant Learning for Improved Syn-to-real Generalization
Gilhyun Nam, Gyeongjae Choi, Kyungmin Lee
OOD
22 Aug 2022

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang
23 Jul 2022

Knowledge Distillation Meets Open-Set Semi-Supervised Learning
Jing Yang, Xiatian Zhu, Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos
13 May 2022

Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye, Su Lu, De-Chuan Zhan
FedML
04 May 2022

Self-Distillation from the Last Mini-Batch for Consistency Regularization
Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo
30 Mar 2022

Knowledge Distillation with the Reused Teacher Classifier
Defang Chen, Jianhan Mei, Hailin Zhang, C. Wang, Yan Feng, Chun-Yen Chen
26 Mar 2022

SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for Lightweight Skin Lesion Classification Using Dermoscopic Images
Yongwei Wang, Yuheng Wang, Tim K. Lee, C. Miao, Z. J. Wang
22 Mar 2022

Exploring Patch-wise Semantic Relation for Contrastive Learning in Image-to-Image Translation Tasks
Chanyong Jung, Gihyun Kwon, Jong Chul Ye
03 Mar 2022

Distillation with Contrast is All You Need for Self-Supervised Point Cloud Representation Learning
Kexue Fu, Peng Gao, Renrui Zhang, Hongsheng Li, Yu Qiao, Manning Wang
SSL, 3DPC
09 Feb 2022

Iterative Self Knowledge Distillation -- From Pothole Classification to Fine-Grained and COVID Recognition
Kuan-Chuan Peng
04 Feb 2022

Unsupervised Domain Adaptive Person Re-Identification via Human Learning Imitation
Yang Peng, Ping Liu, Yawei Luo, Pan Zhou, Zichuan Xu, Jingen Liu
OOD
28 Nov 2021

EvDistill: Asynchronous Events to End-task Learning via Bidirectional Reconstruction-guided Cross-modal Knowledge Distillation
Lin Wang, Yujeong Chae, Sung-Hoon Yoon, Tae-Kyun Kim, Kuk-Jin Yoon
24 Nov 2021

Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles
Bram Wallace, Devansh Arpit, Huan Wang, Caiming Xiong
SSL, OOD
19 Oct 2021

Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
07 Sep 2021

Neighborhood Consensus Contrastive Learning for Backward-Compatible Representation
Shengsen Wu, Liang Chen, Yihang Lou, Yan Bai, Tao Bai, Minghua Deng, Ling-yu Duan
07 Aug 2021

Linking Common Vulnerabilities and Exposures to the MITRE ATT&CK Framework: A Self-Distillation Approach
Benjamin Ampel, Sagar Samtani, Steven Ullman, Hsinchun Chen
03 Aug 2021

Hierarchical Self-supervised Augmented Knowledge Distillation
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
SSL
29 Jul 2021

DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning
Yuting Gao, Jia-Xin Zhuang, Xiaowei Guo, Hao Cheng, Xing Sun, Ke Li, Feiyue Huang
19 Apr 2021

Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Guodong Xu, Ziwei Liu, Chen Change Loy
UQCV
17 Dec 2020

Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang
18 Sep 2020

A Unified Framework for Shot Type Classification Based on Subject Centric Lens
Anyi Rao, Jiaze Wang, Linning Xu, Xuekun Jiang, Qingqiu Huang, Bolei Zhou, Dahua Lin
08 Aug 2020

Self-supervised Knowledge Distillation for Few-shot Learning
Jathushan Rajasegaran, Salman Khan, Munawar Hayat, F. Khan, M. Shah
SSL
17 Jun 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM
09 Jun 2020

ResKD: Residual-Guided Knowledge Distillation
Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li
08 Jun 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018

Boosting Self-Supervised Learning via Knowledge Transfer
M. Noroozi, Ananth Vinjimoor, Paolo Favaro, Hamed Pirsiavash
SSL
01 May 2018