Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola
arXiv 1910.10699 · 23 October 2019
Papers citing "Contrastive Representation Distillation"

50 of 611 citing papers shown.
LAKD-Activation Mapping Distillation Based on Local Learning
Yaoze Zhang, Yuming Zhang, Yu Zhao, Yue Zhang, Feiyu Zhu
21 Aug 2024

Focus on Focus: Focus-oriented Representation Learning and Multi-view Cross-modal Alignment for Glioma Grading
Li Pan, Yupei Zhang, Qiushi Yang, Tan Li, Xiaohan Xing, Maximus C. F. Yeung, Zhen Chen
16 Aug 2024

Knowledge Distillation with Refined Logits
Wujie Sun, Defang Chen, Siwei Lyu, Genlang Chen, Chun-Yen Chen, Can Wang
14 Aug 2024

Depth Helps: Improving Pre-trained RGB-based Policy with Depth Information Injection
Xincheng Pang, Wenke Xia, Zhigang Wang, Bin Zhao, Di Hu, Dong Wang, Xuelong Li
09 Aug 2024

UNIC: Universal Classification Models via Multi-teacher Distillation
Mert Bulent Sariyildiz, Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis
09 Aug 2024

How to Train the Teacher Model for Effective Knowledge Distillation
Shayan Mohajer Hamidi, Xizhen Deng, Renhao Tan, Linfeng Ye, Ahmed H. Salamah
25 Jul 2024
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning
Shihua Sun, Shridatt Sugrim, Angelos Stavrou, Haining Wang
AAML
13 Jul 2024

A Survey on Symbolic Knowledge Distillation of Large Language Models
Kamal Acharya, Alvaro Velasquez, H. Song
SyDa
12 Jul 2024

Reprogramming Distillation for Medical Foundation Models
Yuhang Zhou, Siyuan Du, Haolin Li, Jiangchao Yao, Ya Zhang, Yanfeng Wang
09 Jul 2024

AMD: Automatic Multi-step Distillation of Large-scale Vision Models
Cheng Han, Qifan Wang, S. Dianat, Majid Rabbani, Raghuveer M. Rao, Yi Fang, Qiang Guan, Lifu Huang, Dongfang Liu
VLM
05 Jul 2024

Adaptive Modality Balanced Online Knowledge Distillation for Brain-Eye-Computer based Dim Object Detection
Zixing Li, Chao Yan, Zhen Lan, Xiaojia Xiang, Han Zhou, Jun Lai, Dengqing Tang
02 Jul 2024

Instance Temperature Knowledge Distillation
Zhengbo Zhang, Yuxi Zhou, Jia Gong, Jun Liu, Zhigang Tu
27 Jun 2024
InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation
Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan
25 Jun 2024

Lightweight Model Pre-training via Language Guided Knowledge Distillation
Mingsheng Li, Lin Zhang, Mingzhen Zhu, Zilong Huang, Gang Yu, Jiayuan Fan, Tao Chen
17 Jun 2024

Self-Regulated Data-Free Knowledge Amalgamation for Text Classification
Prashanth Vijayaraghavan, Hongzhi Wang, Luyao Shi, Tyler Baldwin, David Beymer, Ehsan Degan
16 Jun 2024

A Label is Worth a Thousand Images in Dataset Distillation
Tian Qin, Zhiwei Deng, David Alvarez-Melis
DD
15 Jun 2024

Adaptive Teaching with Shared Classifier for Knowledge Distillation
Jaeyeon Jang, Young-Ik Kim, Jisu Lim, Hyeonseong Lee
12 Jun 2024

DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas
12 Jun 2024
Heterogeneous Learning Rate Scheduling for Neural Architecture Search on Long-Tailed Datasets
Chenxia Tang
11 Jun 2024

ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs
Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang
06 Jun 2024

PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs
Rongzhi Zhang, Jiaming Shen, Tianqi Liu, Haorui Wang, Zhen Qin, Feng Han, Jialu Liu, Simon Baumgartner, Michael Bendersky, Chao Zhang
05 Jun 2024

Distilling Aggregated Knowledge for Weakly-Supervised Video Anomaly Detection
Jash Dalvi, Ali Dabouei, Gunjan Dhanuka, Min Xu
05 Jun 2024

Tiny models from tiny data: Textual and null-text inversion for few-shot distillation
Erik Landolsi, Fredrik Kahl
DiffM
05 Jun 2024

Feature contamination: Neural networks learn uncorrelated features and fail to generalize
Tianren Zhang, Chujie Zhao, Guanyu Chen, Yizhou Jiang, Feng Chen
OOD, MLT, OODD
05 Jun 2024
Estimating Human Poses Across Datasets: A Unified Skeleton and Multi-Teacher Distillation Approach
Muhammad Gul Zain Ali Khan, Dhavalkumar Limbachiya, Didier Stricker, Muhammad Zeshan Afzal
3DH
30 May 2024

Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization
Xiaolong Yu, Cong Tian
30 May 2024

Aligning in a Compact Space: Contrastive Knowledge Distillation between Heterogeneous Architectures
Hongjun Wu, Li Xiao, Xingkuo Zhang, Yining Miao
28 May 2024

Retro: Reusing teacher projection head for efficient embedding distillation on Lightweight Models via Self-supervised Learning
Khanh-Binh Nguyen, Chae Jung Park
24 May 2024

Exploring Dark Knowledge under Various Teacher Capacities and Addressing Capacity Mismatch
Xin-Chun Li, Wen-Shu Fan, Bowen Tao, Le Gan, De-Chuan Zhan
21 May 2024

Stereo-Knowledge Distillation from dpMV to Dual Pixels for Light Field Video Reconstruction
Aryan Garg, Raghav Mallampali, Akshat Joshi, Shrisudhan Govindarajan, Kaushik Mitra
20 May 2024
Cross-Domain Knowledge Distillation for Low-Resolution Human Pose Estimation
Zejun Gu, Zhongming Zhao, Henghui Ding, Hao Shen, Zhao Zhang, De-Shuang Huang
19 May 2024

Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing
Yunlong Zhao, Xiaoheng Deng, Yijing Liu, Xin-jun Pei, Jiazhi Xia, Wei Chen
AAML
18 May 2024

Exploring Graph-based Knowledge: Multi-Level Feature Distillation via Channels Relational Graph
Zhiwei Wang, Jun Huang, Longhua Ma, Chengyu Wu, Hongyu Ma
14 May 2024

Self-Distillation Improves DNA Sequence Inference
Tong Yu, Lei Cheng, Ruslan Khalitov, Erland Brandser Olsson, Zhirong Yang
SyDa
14 May 2024

Navigating the Future of Federated Recommendation Systems with Foundation Models
Zhiwei Li, Guodong Long, Chunxu Zhang, Honglei Zhang, Jing Jiang, Chengqi Zhang
12 May 2024
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Min-man Wu, Xiaoli Li
09 May 2024

DVMSR: Distillated Vision Mamba for Efficient Super-Resolution
Xiaoyan Lei, Wenlong Zhang, Weifeng Cao
05 May 2024

Low-Rank Knowledge Decomposition for Medical Foundation Models
Yuhang Zhou, Haolin Li, Siyuan Du, Jiangchao Yao, Ya-Qin Zhang, Yanfeng Wang
26 Apr 2024

Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities
Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuai Wang, Yan Wang, Kun Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang
25 Apr 2024

CNN2GNN: How to Bridge CNN with GNN
Ziheng Jiao, Hongyuan Zhang, Xuelong Li
23 Apr 2024
Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning
Yanhui Guo, Shaoyuan Xu, Jinmiao Fu, Jia-Wei Liu, Chaosheng Dong, Bryan Wang
VLM, CLL
22 Apr 2024

Heterogeneous Face Recognition Using Domain Invariant Units
Anjith George, Sébastien Marcel
CVBM
22 Apr 2024

A Multimodal Feature Distillation with CNN-Transformer Network for Brain Tumor Segmentation with Incomplete Modalities
Ming Kang, F. F. Ting, Raphaël C.-W. Phan, Zongyuan Ge, Chee-Ming Ting
22 Apr 2024

CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu
VLM
22 Apr 2024

Dynamic Temperature Knowledge Distillation
Yukang Wei, Yu Bai
19 Apr 2024

Data-free Knowledge Distillation for Fine-grained Visual Categorization
Renrong Shao, Wei Zhang, Jianhua Yin, Jun Wang
18 Apr 2024
Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning
Zhengyang Liang, Meiyu Liang, Wei Huang, Yawen Li, Zhe Xue
16 Apr 2024

On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
Sean Farhat, Deming Chen
04 Apr 2024

Improve Knowledge Distillation via Label Revision and Data Selection
Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen
03 Apr 2024

Federated Distillation: A Survey
Lin Li, Jianping Gou, Baosheng Yu, Lan Du, Zhang Yi, Dacheng Tao
DD, FedML
02 Apr 2024