Contrastive Representation Distillation

International Conference on Learning Representations (ICLR), 2020
23 October 2019
Yonglong Tian
Dilip Krishnan
Phillip Isola
arXiv (abs) · PDF · HTML · GitHub (2336★)

Papers citing "Contrastive Representation Distillation"

50 / 686 papers shown
From Low-Rank Features to Encoding Mismatch: Rethinking Feature Distillation in Vision Transformers
Huiyuan Tian
Bonan Xu
Shijian Li
Xin Jin
48
0
0
19 Nov 2025
Logit-Based Losses Limit the Effectiveness of Feature Knowledge Distillation
Nicholas Cooper
Lijun Chen
Sailesh Dwivedy
Danna Gurari
68
0
0
18 Nov 2025
DGS-Net: Distillation-Guided Gradient Surgery for CLIP Fine-Tuning in AI-Generated Image Detection
Jiazhen Yan
Ziqiang Li
Fan Wang
Boyu Wang
Zhangjie Fu
144
0
0
17 Nov 2025
Distillation Dynamics: Towards Understanding Feature-Based Distillation in Vision Transformers
Huiyuan Tian
Bonan Xu
Shijian Li
125
1
0
10 Nov 2025
Contrast-Guided Cross-Modal Distillation for Thermal Object Detection
SiWoo Kim
JhongHyun An
92
0
0
03 Nov 2025
Mitigating Semantic Collapse in Partially Relevant Video Retrieval
WonJun Moon
MinSeok Jung
Gilhan Park
Tae-Young Kim
Cheol-Ho Cho
Woojin Jun
Jae-Pil Heo
100
0
0
31 Oct 2025
Distilling Multilingual Vision-Language Models: When Smaller Models Stay Multilingual
Sukrit Sriratanawilai
Jhayahgrit Thongwat
Romrawin Chumpu
Patomporn Payoungkhamdee
Sarana Nutanong
Peerat Limkonchotiwat
VLM
98
0
0
30 Oct 2025
UHKD: A Unified Framework for Heterogeneous Knowledge Distillation via Frequency-Domain Representations
Fengming Yu
Haiwei Pan
Kejia Zhang
Jian Guan
Haiying Jiang
117
0
0
28 Oct 2025
Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity
S. Yu
Dongjun Nam
Dina Katabi
Jeany Son
104
0
0
26 Oct 2025
Revisiting Knowledge Distillation: The Hidden Role of Dataset Size
Giulia Lanzillotta
Felix Sarnthein
Gil Kur
Thomas Hofmann
Bobby He
99
0
0
17 Oct 2025
Diffusion-Assisted Distillation for Self-Supervised Graph Representation Learning with MLPs
IEEE Transactions on Artificial Intelligence (IEEE TAI), 2025
Seong Jin Ahn
Myoung-Ho Kim
222
0
0
05 Oct 2025
Lightweight and Generalizable Acoustic Scene Representations via Contrastive Fine-Tuning and Distillation
Kuang Yuan
Yang Gao
Xilin Li
Xinhao Mei
Syavosh Zadissa
Tarun Pruthi
Saeed Bagheri Sereshki
VLM
80
0
0
04 Oct 2025
BioX-Bridge: Model Bridging for Unsupervised Cross-Modal Knowledge Transfer across Biosignals
Chenqi Li
Yu Liu
Timothy Denison
T. Zhu
108
0
0
02 Oct 2025
Enriching Knowledge Distillation with Intra-Class Contrastive Learning
Hua Yuan
Ning Xu
Xin Geng
Yong Rui
96
0
0
26 Sep 2025
Global Minimizers of Sigmoid Contrastive Loss
Kiril Bangachev
Guy Bresler
Iliyas Noman
Yury Polyanskiy
113
0
0
23 Sep 2025
iCD: An Implicit Clustering Distillation Method for Structural Information Mining
Xiang Xue
Yatu Ji
Qing-dao-er-ji Ren
Bao Shi
Min Lu
Nier Wu
Xufei Zhuang
Haiteng Xu
Gan-qi-qi-ge Cha
98
0
0
16 Sep 2025
Uncertainty-Aware Retinal Vessel Segmentation via Ensemble Distillation
Jeremiah Fadugba
P. Manescu
Bolanle Oladejo
D. Fernández-Reyes
Philipp Berens
UQCV, OOD, FedML
158
0
0
15 Sep 2025
Enriched text-guided variational multimodal knowledge distillation network (VMD) for automated diagnosis of plaque vulnerability in 3D carotid artery MRI
Bo Cao
Fan Yu
Mengmeng Feng
SenHao Zhang
Xin Meng
Yue Zhang
Zhen Qian
Jie Lu
90
0
0
15 Sep 2025
Delta Activations: A Representation for Finetuned Large Language Models
Zhiqiu Xu
Amish Sethi
Mayur Naik
Ser-Nam Lim
122
0
0
04 Sep 2025
ATMS-KD: Adaptive Temperature and Mixed Sample Knowledge Distillation for a Lightweight Residual CNN in Agricultural Embedded Systems
Mohamed Ohamouddou
Said Ohamouddou
A. E. Afia
Rafik Lasri
72
0
0
27 Aug 2025
The Role of Teacher Calibration in Knowledge Distillation
IEEE Access, 2025
Suyoung Kim
Seonguk Park
Junhoo Lee
Nojun Kwak
56
0
0
27 Aug 2025
Parameter-Free Logit Distillation via Sorting Mechanism
IEEE Signal Processing Letters (IEEE SPL), 2025
Stephen Ekaputra Limantoro
68
0
0
22 Aug 2025
Expandable Residual Approximation for Knowledge Distillation
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2025
Zhaoyi Yan
Binghui Chen
Yunfan Liu
Qixiang Ye
CLL
101
0
0
22 Aug 2025
Distilled-3DGS: Distilled 3D Gaussian Splatting
Lintao Xiang
Xinkai Chen
Jianhuang Lai
Guangcong Wang
3DGS
101
0
0
19 Aug 2025
TopKD: Top-scaled Knowledge Distillation
Qi Wang
Jinjia Zhou
83
0
0
06 Aug 2025
REACT-KD: Region-Aware Cross-modal Topological Knowledge Distillation for Interpretable Medical Image Classification
Hongzhao Chen
Hexiao Ding
Yufeng Jiang
Jing Lan
Ka Chun Li
...
Sam Ng
Chi Lai Ho
Jing Cai
Liang-ting Lin
Jung Sun Yoo
166
0
0
04 Aug 2025
$R^2$-CoD: Understanding Text-Graph Complementarity in Relational Reasoning via Knowledge Co-Distillation
Zhen Wu
Ritam Dutt
Luke M. Breitfeller
Armineh Nourbakhsh
Siddharth Parekh
Carolyn Rose
86
0
0
02 Aug 2025
Cross-Modal Distillation For Widely Differing Modalities
Cairong Zhao
Yufeng Jin
Zifan Song
Haonan Chen
Duoqian Miao
Guosheng Hu
136
0
0
22 Jul 2025
Generative Distribution Distillation
Jiequan Cui
B. Zhu
Qingshan Xu
Xiaogang Xu
Pengguang Chen
Xiaojuan Qi
Bei Yu
Hanwang Zhang
Richang Hong
OffRL
144
0
0
19 Jul 2025
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning
Yichen Li
Xiuying Wang
Wenchao Xu
Haozhao Wang
Yining Qi
Jiahua Dong
Ruixuan Li
FedML
183
1
0
14 Jul 2025
Consistent Supervised-Unsupervised Alignment for Generalized Category Discovery
Jizhou Han
S. Wang
Yuhang He
Chenhao Ding
Qiang Wang
Xinyuan Gao
Songlin Dong
Yihong Gong
90
0
0
07 Jul 2025
GenRecal: Generation after Recalibration from Large to Small Vision-Language Models
Byung-Kwan Lee
Ryo Hachiuma
Yong Man Ro
Yu-Chun Wang
Yueh-Hua Wu
VLM
251
2
0
18 Jun 2025
A Layered Self-Supervised Knowledge Distillation Framework for Efficient Multimodal Learning on the Edge
Tarique Dahri
Zulfiqar Ali Memon
Zhenyu Yu
Mohd Yamani Idna Idris
Sheheryar Khan
Sadiq Ahmad
Maged Shoman
Saddam Aziz
Rizwan Qureshi
141
0
0
08 Jun 2025
Progressive Class-level Distillation
JiaYan Li
Jun Li
Zhourui Zhang
Jianhua Xu
140
0
0
30 May 2025
A Closer Look at Multimodal Representation Collapse
Abhra Chaudhuri
Anjan Dutta
Tu Bui
Serban Georgescu
205
5
0
28 May 2025
Distill CLIP (DCLIP): Enhancing Image-Text Retrieval via Cross-Modal Transformer Distillation
Daniel Csizmadia
Andrei Codreanu
Victor Sim
Vighnesh Prabhu
Michael Lu
Kevin Zhu
Sean O'Brien
CLIP, VLM
391
4
0
25 May 2025
DeepKD: A Deeply Decoupled and Denoised Knowledge Distillation Trainer
Haiduo Huang
Jiangcheng Song
Yadong Zhang
Pengju Ren
271
0
0
21 May 2025
Field Matters: A lightweight LLM-enhanced Method for CTR Prediction
Yu Cui
Yifan Zhang
Jiawei Chen
Yudi Wu
Changwang Zhang
Jun Wang
Yuegang Sun
Xiaohu Yang
Can Wang
220
0
0
20 May 2025
Intra-class Patch Swap for Self-Distillation
Hongjun Choi
Eun Som Jeon
Ankita Shukla
Pavan Turaga
203
0
0
20 May 2025
FiGKD: Fine-Grained Knowledge Distillation via High-Frequency Detail Transfer
Seonghak Kim
325
1
0
17 May 2025
Distilled Circuits: A Mechanistic Study of Internal Restructuring in Knowledge Distillation
Reilly Haskins
Benjamin Adams
229
0
0
16 May 2025
ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $α$-$β$-Divergence
Guanghui Wang
Zhiyong Yang
Liang Luo
Shi Wang
Qianqian Xu
Qingming Huang
605
5
0
07 May 2025
Swapped Logit Distillation via Bi-level Teacher Alignment
Stephen Ekaputra Limantoro
Jhe-Hao Lin
Chih-Yu Wang
Yi-Lung Tsai
Hong-Han Shuai
Ching-Chun Huang
Wen-Huang Cheng
329
1
0
27 Apr 2025
CMCRD: Cross-Modal Contrastive Representation Distillation for Emotion Recognition
Siyuan Kan
Huanyu Wu
Zhenyao Cui
Fan Huang
Xiaolong Xu
Dongrui Wu
229
0
0
12 Apr 2025
An Efficient Training Algorithm for Models with Block-wise Sparsity
Ding Zhu
Zhiqun Zuo
Mohammad Mahdi Khalili
165
0
0
27 Mar 2025
Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Anshumann
Mohd Abbas Zaidi
Akhil Kedia
Jinwoo Ahn
Taehwak Kwon
Kangwook Lee
Haejun Lee
Joohyung Lee
FedML
753
1
0
21 Mar 2025
Cyclic Contrastive Knowledge Transfer for Open-Vocabulary Object Detection
International Conference on Learning Representations (ICLR), 2025
Chuhan Zhang
Chaoyang Zhu
Pingcheng Dong
Long Chen
Dong Zhang
ObjD, VLM
942
3
0
14 Mar 2025
CalliReader: Contextualizing Chinese Calligraphy via an Embedding-Aligned Vision-Language Model
Yuxuan Luo
Jiaqi Tang
Chenyi Huang
Feiyang Hao
Zhouhui Lian
VLM
230
0
0
13 Mar 2025
Asymmetric Decision-Making in Online Knowledge Distillation: Unifying Consensus and Divergence
Zhaowei Chen
Borui Zhao
Yuchen Ge
Yuhao Chen
Renjie Song
Jiajun Liang
199
0
0
09 Mar 2025
AugFL: Augmenting Federated Learning with Pretrained Models
Sheng Yue
Zerui Qin
Yongheng Deng
Ju Ren
Yaoxue Zhang
Junshan Zhang
FedML
312
4
0
04 Mar 2025