Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola
International Conference on Learning Representations (ICLR), 2020
arXiv:1910.10699 (v3, latest), posted 23 October 2019 · arXiv abs / PDF / HTML · GitHub (2336★)
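For context on the distilled objective the citing papers build on: the paper trains a student network to match a teacher by maximizing a contrastive bound on the mutual information between their representations. Below is a minimal, illustrative PyTorch sketch of that idea using an InfoNCE-style loss with in-batch negatives; the paper's actual objective draws negatives from a memory buffer, and the function name and temperature default here are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def contrastive_distillation_loss(f_student, f_teacher, temperature=0.07):
    """Illustrative contrastive distillation loss (not the paper's exact objective).

    f_student, f_teacher: (batch, dim) embeddings of the same batch of inputs.
    """
    # Normalize both sets of embeddings so dot products are cosine similarities.
    zs = F.normalize(f_student, dim=1)
    # Teacher is frozen during distillation: block gradients through it.
    zt = F.normalize(f_teacher.detach(), dim=1)
    # logits[i, j] = similarity between student embedding i and teacher embedding j.
    logits = zs @ zt.t() / temperature
    # The positive pair for sample i is the teacher embedding of the same input;
    # all other teacher embeddings in the batch serve as negatives.
    targets = torch.arange(zs.size(0), device=zs.device)
    return F.cross_entropy(logits, targets)

In use, this term would be added to the usual task loss (e.g. cross-entropy on labels) when training the student, with features taken from the penultimate layers of both networks.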

Papers citing "Contrastive Representation Distillation"
Showing 50 of 686 citing papers.
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning
  Shihua Sun, Shridatt Sugrim, Angelos Stavrou, Haining Wang · AAML · 13 Jul 2024
A Survey on Symbolic Knowledge Distillation of Large Language Models
  Kamal Acharya, Alvaro Velasquez, Haoze Song · SyDa · 12 Jul 2024
Reprogramming Distillation for Medical Foundation Models
  Yuhang Zhou, Siyuan Du, Haolin Li, Jiangchao Yao, Ya Zhang, Yanfeng Wang · 09 Jul 2024
AMD: Automatic Multi-step Distillation of Large-scale Vision Models
  Cheng Han, Qifan Wang, S. Dianat, Majid Rabbani, Raghuveer M. Rao, Yi Fang, Qiang Guan, Lifu Huang, Dongfang Liu · VLM · 05 Jul 2024
Adaptive Modality Balanced Online Knowledge Distillation for Brain-Eye-Computer based Dim Object Detection
  Zixing Li, Chao Yan, Zhen Lan, Xiaojia Xiang, Han Zhou, Jun Lai, Dengqing Tang · 02 Jul 2024
Instance Temperature Knowledge Distillation
  Zitao Gao, Yuxi Zhou, Jia Gong, Jun Liu, Zhigang Tu · 27 Jun 2024
InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation
  Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan · 25 Jun 2024
Lightweight Model Pre-training via Language Guided Knowledge Distillation
  Mingsheng Li, Lin Zhang, Mingzhen Zhu, Zilong Huang, Gang Yu, Jiayuan Fan, Tao Chen · 17 Jun 2024
Self-Regulated Data-Free Knowledge Amalgamation for Text Classification
  Prashanth Vijayaraghavan, Hongzhi Wang, Luyao Shi, Tyler Baldwin, David Beymer, Ehsan Degan · 16 Jun 2024
A Label is Worth a Thousand Images in Dataset Distillation
  Neural Information Processing Systems (NeurIPS), 2024 · Tian Qin, Zhiwei Deng, David Alvarez-Melis · DD · 15 Jun 2024
Adaptive Teaching with Shared Classifier for Knowledge Distillation
  Jaeyeon Jang, Young-Ik Kim, Jisu Lim, Hyeonseong Lee · 12 Jun 2024
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
  Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas · 12 Jun 2024
Heterogeneous Learning Rate Scheduling for Neural Architecture Search on Long-Tailed Datasets
  Chenxia Tang · 11 Jun 2024
ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs
  Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang · 06 Jun 2024
PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs
  Rongzhi Zhang, Jiaming Shen, Tianqi Liu, Haorui Wang, Zhen Qin, Feng Han, Jialu Liu, Simon Baumgartner, Michael Bendersky, Chao Zhang · 05 Jun 2024
Distilling Aggregated Knowledge for Weakly-Supervised Video Anomaly Detection
  Jash Dalvi, Ali Dabouei, Gunjan Dhanuka, Min Xu · 05 Jun 2024
Feature contamination: Neural networks learn uncorrelated features and fail to generalize
  Tianren Zhang, Chujie Zhao, Guanyu Chen, Yizhou Jiang, Feng Chen · OOD, MLT, OODD · 05 Jun 2024
Tiny models from tiny data: Textual and null-text inversion for few-shot distillation
  Erik Landolsi, Fredrik Kahl · DiffM · 05 Jun 2024
Estimating Human Poses Across Datasets: A Unified Skeleton and Multi-Teacher Distillation Approach
  Muhammad Gul Zain Ali Khan, Dhavalkumar Limbachiya, Didier Stricker, Muhammad Zeshan Afzal · 3DH · 30 May 2024
Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization
  Xiaolong Yu, Cong Tian · 30 May 2024
Aligning in a Compact Space: Contrastive Knowledge Distillation between Heterogeneous Architectures
  Hongjun Wu, Li Xiao, Xingkuo Zhang, Yining Miao · 28 May 2024
Retro: Reusing teacher projection head for efficient embedding distillation on Lightweight Models via Self-supervised Learning
  Khanh-Binh Nguyen, Chae Jung Park · 24 May 2024
Exploring Dark Knowledge under Various Teacher Capacities and Addressing Capacity Mismatch
  Wen-Shu Fan, Xin-Chun Li, Bowen Tao · 21 May 2024
Stereo-Knowledge Distillation from dpMV to Dual Pixels for Light Field Video Reconstruction
  Aryan Garg, Raghav Mallampali, Akshat Joshi, Shrisudhan Govindarajan, Kaushik Mitra · 20 May 2024
Cross-Domain Knowledge Distillation for Low-Resolution Human Pose Estimation
  Zejun Gu, Zhongming Zhao, Henghui Ding, Hao Shen, Zhao Zhang, De-Shuang Huang · 19 May 2024
Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing
  Computer Vision and Pattern Recognition (CVPR), 2024 · Yunlong Zhao, Xiaoheng Deng, Yijing Liu, Xin-jun Pei, Jiazhi Xia, Wei Chen · AAML · 18 May 2024
Open-Vocabulary Object Detection via Neighboring Region Attention Alignment
  Engineering Applications of Artificial Intelligence (EAAI), 2024 · Sunyuan Qiang, Xianfei Li, Yanyan Liang, Wenlong Liao, Tao He, Pai Peng · ObjD · 14 May 2024
Exploring Graph-based Knowledge: Multi-Level Feature Distillation via Channels Relational Graph
  Zhiwei Wang, Jun Huang, Longhua Ma, Chengyu Wu, Hongyu Ma · 14 May 2024
Self-Distillation Improves DNA Sequence Inference
  Tong Yu, Lei Cheng, Ruslan Khalitov, Erland Brandser Olsson, Zhirong Yang · SyDa · 14 May 2024
Navigating the Future of Federated Recommendation Systems with Foundation Models
  Zhiwei Li, Guodong Long, Chunxu Zhang, Honglei Zhang, Jing Jiang, Chengqi Zhang · 12 May 2024
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
  IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024 · Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Ruibing Jin, Xiaoli Li · 09 May 2024
DVMSR: Distillated Vision Mamba for Efficient Super-Resolution
  Xiaoyan Lei, Wenlong Zhang, Weifeng Cao · 05 May 2024
Low-Rank Knowledge Decomposition for Medical Foundation Models
  Yuhang Zhou, Haolin Li, Siyuan Du, Jiangchao Yao, Ya Zhang, Yanfeng Wang · 26 Apr 2024
Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities
  Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuai Wang, Yan Wang, Kun Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang · 25 Apr 2024
CNN2GNN: How to Bridge CNN with GNN
  Ziheng Jiao, Hongyuan Zhang, Xuelong Li · 23 Apr 2024
Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning
  Yanhui Guo, Shaoyuan Xu, Jinmiao Fu, Jia-Wei Liu, Chaosheng Dong, Bryan Wang · VLM, CLL · 22 Apr 2024
Heterogeneous Face Recognition Using Domain Invariant Units
  Anjith George, Sébastien Marcel · CVBM · 22 Apr 2024
A Multimodal Feature Distillation with CNN-Transformer Network for Brain Tumor Segmentation with Incomplete Modalities
  Ming Kang, F. F. Ting, Raphaël C.-W. Phan, Zongyuan Ge, Chee-Ming Ting · 22 Apr 2024
CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
  Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu · VLM · 22 Apr 2024
Dynamic Temperature Knowledge Distillation
  Yukang Wei, Yu Bai · 19 Apr 2024
An Experimental Study on Exploring Strong Lightweight Vision Transformers via Masked Image Modeling Pre-Training
  Jin Gao, Shubo Lin, Shaoru Wang, Yutong Kou, Zeming Li, Liang Li, Congxuan Zhang, Xiaoqin Zhang, Yizheng Wang, Weiming Hu · 18 Apr 2024
Data-free Knowledge Distillation for Fine-grained Visual Categorization
  Renrong Shao, Wei Zhang, Jianhua Yin, Jun Wang · 18 Apr 2024
Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning
  Zhengyang Liang, Meiyu Liang, Wei Huang, Yawen Li, Zhe Xue · 16 Apr 2024
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
  Sean Farhat, Deming Chen · 04 Apr 2024
Improve Knowledge Distillation via Label Revision and Data Selection
  IEEE Transactions on Cognitive and Developmental Systems (IEEE TCDS), 2024 · Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen · 03 Apr 2024
Federated Distillation: A Survey
  Lin Li, Jianping Gou, Baosheng Yu, Lan Du, Zhang Yi, Dacheng Tao · DD, FedML · 02 Apr 2024
Learning to Project for Cross-Task Knowledge Distillation
  Dylan Auty, Roy Miles, Benedikt Kolbeinsson, K. Mikolajczyk · 21 Mar 2024
Scale Decoupled Distillation
  Shicai Wei · 20 Mar 2024
HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation
  International Journal of Computer Vision (IJCV), 2024 · Sha Zhang, Jiajun Deng, Mengwei He, Houqiang Li, Wanli Ouyang, Yanyong Zhang · 3DPC · 18 Mar 2024
Don't Judge by the Look: Towards Motion Coherent Video Representation
  International Conference on Learning Representations (ICLR), 2024 · Yitian Zhang, Yue Bai, Huan Wang, Yizhou Wang, Yun Fu · 14 Mar 2024