Contrastive Representation Distillation

23 October 2019
Yonglong Tian
Dilip Krishnan
Phillip Isola

Papers citing "Contrastive Representation Distillation"

50 / 611 papers shown
Improving Knowledge Distillation via Regularizing Feature Norm and Direction
Yuzhu Wang
Lechao Cheng
Manni Duan
Yongheng Wang
Zunlei Feng
Shu Kong
21
19
0
26 May 2023
Three Towers: Flexible Contrastive Learning with Pretrained Image Models
Jannik Kossen
Mark Collier
Basil Mustafa
Xiao Wang
Xiaohua Zhai
Lucas Beyer
Andreas Steiner
Jesse Berent
Rodolphe Jenatton
Efi Kokiopoulou
VLM
37
11
0
26 May 2023
Triplet Knowledge Distillation
Xijun Wang
Dongyang Liu
Meina Kan
Chunrui Han
Zhongqin Wu
Shiguang Shan
21
3
0
25 May 2023
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale
Zhiwei Hao
Jianyuan Guo
Kai Han
Han Hu
Chang Xu
Yunhe Wang
30
16
0
25 May 2023
On the Impact of Knowledge Distillation for Model Interpretability
Hyeongrok Han
Siwon Kim
Hyun-Soo Choi
Sungroh Yoon
10
4
0
25 May 2023
Knowledge Diffusion for Distillation
Tao Huang
Yuan Zhang
Mingkai Zheng
Shan You
Fei Wang
Chao Qian
Chang Xu
29
50
0
25 May 2023
Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui
Zhuotao Tian
Zhisheng Zhong
Xiaojuan Qi
Bei Yu
Hanwang Zhang
34
38
0
23 May 2023
NORM: Knowledge Distillation via N-to-One Representation Matching
Xiaolong Liu
Lujun Li
Chao Li
Anbang Yao
41
68
0
23 May 2023
Revisiting Data Augmentation in Model Compression: An Empirical and Comprehensive Study
Muzhou Yu
Linfeng Zhang
Kaisheng Ma
15
2
0
22 May 2023
Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li
Yuxuan Li
Penghai Zhao
Renjie Song
Xiang Li
Jian Yang
29
19
0
22 May 2023
Student-friendly Knowledge Distillation
Mengyang Yuan
Bo Lang
Fengnan Quan
18
17
0
18 May 2023
DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing
Songling Zhu
Ronghua Shang
Bo Yuan
Weitong Zhang
Yangyang Li
Licheng Jiao
16
7
0
09 May 2023
Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation
Rongzhi Zhang
Jiaming Shen
Tianqi Liu
Jia-Ling Liu
Michael Bendersky
Marc Najork
Chao Zhang
37
18
0
08 May 2023
Avatar Knowledge Distillation: Self-ensemble Teacher Paradigm with Uncertainty
Yuan Zhang
Weihua Chen
Yichen Lu
Tao Huang
Xiuyu Sun
Jian Cao
50
8
0
04 May 2023
MolKD: Distilling Cross-Modal Knowledge in Chemical Reactions for Molecular Property Prediction
Liang Zeng
Lanqing Li
Jian Li
45
3
0
03 May 2023
On Uni-Modal Feature Learning in Supervised Multi-Modal Learning
Chenzhuang Du
Jiaye Teng
Tingle Li
Yichen Liu
Tianyuan Yuan
Yue Wang
Yang Yuan
Hang Zhao
72
38
0
02 May 2023
Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels
Min-Kook Suh
Seung-Woo Seo
SSL
21
14
0
02 May 2023
File Fragment Classification using Light-Weight Convolutional Neural Networks
Mustafa Ghaleb
K. Saaim
Muhamad Felemban
S. Al-Saleh
Ahmad S. Al-Mulhem
17
1
0
01 May 2023
Class Attention Transfer Based Knowledge Distillation
Ziyao Guo
Haonan Yan
Hui Li
Xiao-La Lin
11
61
0
25 Apr 2023
Improving Knowledge Distillation via Transferring Learning Ability
Long Liu
Tong Li
Hui Cheng
6
1
0
24 Apr 2023
Function-Consistent Feature Distillation
Dongyang Liu
Meina Kan
Shiguang Shan
Xilin Chen
44
18
0
24 Apr 2023
eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation
Libo Huang
Yan Zeng
Chuanguang Yang
Zhulin An
Boyu Diao
Yongjun Xu
CLL
14
2
0
20 Apr 2023
Knowledge Distillation Under Ideal Joint Classifier Assumption
Huayu Li
Xiwen Chen
G. Ditzler
Janet Roveda
Ao Li
10
1
0
19 Apr 2023
Deep Collective Knowledge Distillation
Jihyeon Seo
Kyusam Oh
Chanho Min
Yongkeun Yun
Sungwoo Cho
11
0
0
18 Apr 2023
Robust Cross-Modal Knowledge Distillation for Unconstrained Videos
Wenke Xia
Xingjian Li
Andong Deng
Haoyi Xiong
Dejing Dou
Di Hu
11
4
0
16 Apr 2023
Teacher Network Calibration Improves Cross-Quality Knowledge Distillation
Pia Cuk
Robin Senge
M. Lauri
Simone Frintrop
13
1
0
15 Apr 2023
Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning
Kaiyou Song
Jin Xie
Shanyi Zhang
Zimeng Luo
24
29
0
13 Apr 2023
Homogenizing Non-IID datasets via In-Distribution Knowledge Distillation for Decentralized Learning
Deepak Ravikumar
Gobinda Saha
Sai Aparna Aketi
Kaushik Roy
11
1
0
09 Apr 2023
Towards Efficient Task-Driven Model Reprogramming with Foundation Models
Shoukai Xu
Jiangchao Yao
Ran Luo
Shuhai Zhang
Zihao Lian
Mingkui Tan
Bo Han
Yaowei Wang
19
6
0
05 Apr 2023
Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation
Yang Jin
Mengke Li
Yang Lu
Y. Cheung
Hanzi Wang
35
21
0
03 Apr 2023
Decomposed Cross-modal Distillation for RGB-based Temporal Action Detection
Pilhyeon Lee
Taeoh Kim
Minho Shim
Dongyoon Wee
H. Byun
24
11
0
30 Mar 2023
Information-Theoretic GAN Compression with Variational Energy-based Model
Minsoo Kang
Hyewon Yoo
Eunhee Kang
Sehwan Ki
Hyong-Euk Lee
Bohyung Han
GAN
18
3
0
28 Mar 2023
Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
Y. Cheng
Yichao Yan
Wenhan Zhu
Ye Pan
Bowen Pan
Xiaokang Yang
3DH
23
3
0
28 Mar 2023
DisWOT: Student Architecture Search for Distillation WithOut Training
Peijie Dong
Lujun Li
Zimian Wei
33
56
0
28 Mar 2023
Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning
Nader Asadi
Mohammad Davar
Sudhir Mudur
Rahaf Aljundi
Eugene Belilovsky
CLL
34
35
0
26 Mar 2023
Disentangling Writer and Character Styles for Handwriting Generation
Gang Dai
Yifan Zhang
Qingfeng Wang
Qing Du
Zhu Liang Yu
Zhuoman Liu
Shuangping Huang
32
21
0
26 Mar 2023
A Simple and Generic Framework for Feature Distillation via Channel-wise Transformation
Ziwei Liu
Yongtao Wang
Xiaojie Chu
24
5
0
23 Mar 2023
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels
Zhendong Yang
Ailing Zeng
Zhe Li
Tianke Zhang
Chun Yuan
Yu Li
21
72
0
23 Mar 2023
FeatureNeRF: Learning Generalizable NeRFs by Distilling Foundation Models
Jianglong Ye
Naiyan Wang
X. Wang
DiffM
40
41
0
22 Mar 2023
Understanding the Role of the Projector in Knowledge Distillation
Roy Miles
K. Mikolajczyk
14
21
0
20 Mar 2023
Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval
Yi Xie
Huaidong Zhang
Xuemiao Xu
Jianqing Zhu
Shengfeng He
VLM
13
13
0
16 Mar 2023
Knowledge Distillation from Single to Multi Labels: an Empirical Study
Youcai Zhang
Yuzhuo Qin
Heng-Ye Liu
Yanhao Zhang
Yaqian Li
X. Gu
VLM
51
2
0
15 Mar 2023
MetaMixer: A Regularization Strategy for Online Knowledge Distillation
Maorong Wang
L. Xiao
T. Yamasaki
KELM
MoE
16
1
0
14 Mar 2023
MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation
Roy Miles
M. K. Yucel
Bruno Manganelli
Albert Saà-Garriga
VOS
38
24
0
14 Mar 2023
Data-Free Sketch-Based Image Retrieval
Abhra Chaudhuri
A. Bhunia
Yi-Zhe Song
Anjan Dutta
40
7
0
14 Mar 2023
A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning
Kaiqi Zhao
Yitao Chen
Ming Zhao
VLM
16
2
0
14 Mar 2023
Learn More for Food Recognition via Progressive Self-Distillation
Yaohui Zhu
Linhu Liu
Jiang Tian
17
5
0
09 Mar 2023
Leveraging Angular Distributions for Improved Knowledge Distillation
Eunyeong Jeon
Hongjun Choi
Ankita Shukla
P. Turaga
6
7
0
27 Feb 2023
LightTS: Lightweight Time Series Classification with Adaptive Ensemble Distillation -- Extended Version
David Campos
Miao Zhang
B. Yang
Tung Kieu
Chenjuan Guo
Christian S. Jensen
AI4TS
40
46
0
24 Feb 2023
Distilling Calibrated Student from an Uncalibrated Teacher
Ishan Mishra
Sethu Vamsi Krishna
Deepak Mishra
FedML
27
2
0
22 Feb 2023