Contrastive Representation Distillation

International Conference on Learning Representations (ICLR), 2020
Posted to arXiv on 23 October 2019
Yonglong Tian, Dilip Krishnan, Phillip Isola
arXiv: 1910.10699. Code available on GitHub (2336 stars).
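For context on the paper being cited: contrastive representation distillation trains a student network so that its representation of an input agrees with the teacher's representation of the same input and disagrees with the teacher's representations of other inputs. The sketch below is a minimal illustration of that idea, not the authors' implementation (see their GitHub repository for that): the function name, the (batch, dim) tensor shapes, and the temperature of 0.1 are illustrative, and it uses in-batch negatives rather than the paper's memory-bank formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_emb: torch.Tensor,
                                  teacher_emb: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: each student embedding should be most similar
    to the teacher embedding of the same input (the diagonal of the batch
    similarity matrix) and dissimilar to the other teacher embeddings."""
    s = F.normalize(student_emb, dim=1)   # unit-norm student vectors
    t = F.normalize(teacher_emb, dim=1)   # unit-norm teacher vectors
    logits = s @ t.T / temperature        # (batch, batch) cosine similarities
    targets = torch.arange(s.size(0), device=s.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Usage with random stand-ins for real teacher/student network outputs:
student = torch.randn(32, 128)
teacher = torch.randn(32, 128)
print(contrastive_distillation_loss(student, teacher))
```

In practice the two embeddings are taken from the teacher and student networks, typically after small projection heads so that their dimensions match.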

Papers citing "Contrastive Representation Distillation"

50 of the 686 citing papers are shown below.
Distilling the Knowledge in Data Pruning. Emanuel Ben-Baruch, Adam Botach, Igor Kviatkovsky, Manoj Aggarwal, Gérard Medioni. 12 Mar 2024.
Attention is all you need for boosting graph convolutional neural network. Yinwei Wu. 10 Mar 2024.
Frequency Attention for Knowledge Distillation. IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2024. Cuong Pham, Van-Anh Nguyen, Trung Le, Dinh Q. Phung, Gustavo Carneiro, Thanh-Toan Do. 09 Mar 2024.
Learning to Maximize Mutual Information for Chain-of-Thought Distillation. Xin Chen, Hanxian Huang, Yanjun Gao, Yi Wang, Jishen Zhao, Ke Ding. 05 Mar 2024.
Logit Standardization in Knowledge Distillation. Shangquan Sun, Wenqi Ren, Jingzhi Li, Rui Wang, Xiaochun Cao. 03 Mar 2024.
Weakly Supervised Monocular 3D Detection with a Single-View Image. Xue-Qiu Jiang, Sheng Jin, Lewei Lu, Xiaoqin Zhang, Shijian Lu. 29 Feb 2024.
Sinkhorn Distance Minimization for Knowledge Distillation. Xiao Cui, Yulei Qin, Yuting Gao, Enwei Zhang, Zihan Xu, Tong Wu, Ke Li, Xing Sun, Wen-gang Zhou, Houqiang Li. 27 Feb 2024.
On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models. Juliette Marrie, Michael Arbel, Julien Mairal, Diane Larlus. 17 Feb 2024.
Knowledge Distillation Based on Transformed Teacher Matching. Kaixiang Zheng, En-Hui Yang. 17 Feb 2024.
Graph Inference Acceleration by Learning MLPs on Graphs without Supervision. Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye. 14 Feb 2024.
Large Language Model Meets Graph Neural Network in Knowledge Distillation. Shengxiang Hu, Guobing Zou, Song Yang, Yanglan Gan, Bofeng Zhang, Yixin Chen. 08 Feb 2024.
Data-efficient Large Vision Models through Sequential Autoregression. Jianyuan Guo, Zhiwei Hao, Chengcheng Wang, Yehui Tang, Han Wu, Han Hu, Kai Han, Chang Xu. 07 Feb 2024.
Good Teachers Explain: Explanation-Enhanced Knowledge Distillation. European Conference on Computer Vision (ECCV), 2024. Amin Parchami-Araghi, Moritz Böhle, Sukrut Rao, Bernt Schiele. 05 Feb 2024.
Precise Knowledge Transfer via Flow Matching. Shitong Shao, Zhiqiang Shen, Linrui Gong, Huanran Chen, Xu Dai. 03 Feb 2024.
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF. Banghua Zhu, Michael I. Jordan, Jiantao Jiao. 29 Jan 2024.
Rethinking Centered Kernel Alignment in Knowledge Distillation. International Joint Conference on Artificial Intelligence (IJCAI), 2024. Zikai Zhou, Chunjiang Ge, Shitong Shao, Linrui Gong, Shaohui Lin. 22 Jan 2024.
Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information. International Conference on Learning Representations (ICLR), 2024. Linfeng Ye, Shayan Mohajer Hamidi, Renhao Tan, En-Hui Yang. 16 Jan 2024.
Source-Free Cross-Modal Knowledge Transfer by Unleashing the Potential of Task-Irrelevant Data. IEEE Transactions on Image Processing (TIP), 2024. Jinjin Zhu, Yucheng Chen, Lin Wang. 10 Jan 2024.
Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing. Zhe Kong, Wentian Zhang, Tao Wang, Kaihao Zhang, Yuexiang Li, Xiaoying Tang, Tong Lu. 02 Jan 2024.
MIM4DD: Mutual Information Maximization for Dataset Distillation. Yuzhang Shang, Zhihang Yuan, Yan Yan. 27 Dec 2023.
ShiftKD: Benchmarking Knowledge Distillation under Distribution Shift. Songming Zhang, Ziyu Lyu, Xiaofeng Chen. 25 Dec 2023.
Segment Any Events via Weighted Adaptation of Pivotal Tokens. Zhiwen Chen, Zhiyu Zhu, Yifan Zhang, Xianqiang Lyu, Guangming Shi, Jinjian Wu. 24 Dec 2023.
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation. Chengming Hu, Haolun Wu, Xuan Li, Chen Ma, Xi Chen, Jun Yan, Boyu Wang, Xue Liu. 22 Dec 2023.
Let All be Whitened: Multi-teacher Distillation for Efficient Visual Retrieval. AAAI Conference on Artificial Intelligence (AAAI), 2023. Zhe Ma, Jianfeng Dong, R. Beyah, Zhenguang Liu, Xuhong Zhang, Zonghui Wang, Sifeng He, Feng Qian, Xiaobo Zhang, Lei Yang. 15 Dec 2023.
RdimKD: Generic Distillation Paradigm by Dimensionality Reduction. Yi Guo, Yiqian He, Xiaoyang Li, Haotong Qin, Van Tung Pham, Yang Zhang, Shouda Liu. 14 Dec 2023.
Augmentation-Free Dense Contrastive Knowledge Distillation for Efficient Semantic Segmentation. Jiawei Fan, Chao Li, Xiaolong Liu, Meina Song, Anbang Yao. 07 Dec 2023.
Contrastive Learning-Based Spectral Knowledge Distillation for Multi-Modality and Missing Modality Scenarios in Semantic Segmentation. Aniruddh Sikdar, Jayant Teotia, Suresh Sundaram. 04 Dec 2023.
Initializing Models with Larger Ones. International Conference on Learning Representations (ICLR), 2023. Zhiqiu Xu, Yanjie Chen, Kirill Vishniakov, Yida Yin, Zhiqiang Shen, Trevor Darrell, Lingjie Liu, Zhuang Liu. 30 Nov 2023.
Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-specific Models. International Conference on Machine Learning (ICML), 2023. Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari, Oncel Tuzel. 30 Nov 2023.
Topology-Preserving Adversarial Training. Xiaoyue Mi, Fan Tang, Yepeng Weng, Danding Wang, Juan Cao, Sheng Tang, Peng Li, Yang Liu. 29 Nov 2023.
LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS. Neural Information Processing Systems (NeurIPS), 2023. Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, Zinan Lin. 28 Nov 2023.
Cosine Similarity Knowledge Distillation for Individual Class Information Transfer. Gyeongdo Ham, Seonghak Kim, Suin Lee, Jae-Hyeok Lee, Daeshik Kim. 24 Nov 2023.
Maximizing Discrimination Capability of Knowledge Distillation with Energy Function. Knowledge-Based Systems (KBS), 2023. Seonghak Kim, Gyeongdo Ham, Suin Lee, Donggon Jang, Daeshik Kim. 24 Nov 2023.
Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2023. Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim. 23 Nov 2023.
Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction. Mohamed El Amine Elforaici, E. Montagnon, Francisco Perdigon Romero, W. Le, F. Azzi, Dominique Trudel, Bich Nguyen, Simon Turcotte, An Tang, Samuel Kadoury. 17 Nov 2023.
Lite it fly: An All-Deformable-Butterfly Network. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023. Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Binxiao Huang, Jie Ran, Ngai Wong. 14 Nov 2023.
Teach me with a Whisper: Enhancing Large Language Models for Analyzing Spoken Transcripts using Speech Embeddings. Fatema Hasan, Yulong Li, James R. Foulds, Shimei Pan, Bishwaranjan Bhattacharjee. 13 Nov 2023.
Text Representation Distillation via Information Bottleneck Principle. Yanzhao Zhang, Dingkun Long, Zehan Li, Pengjun Xie. 09 Nov 2023.
Self-Supervised Learning of Representations for Space Generates Multi-Modular Grid Cells. Neural Information Processing Systems (NeurIPS), 2023. Rylan Schaeffer, Mikail Khona, Tzuhsuan Ma, Cristobal Eyzaguirre, Sanmi Koyejo, Ila Rani Fiete. 04 Nov 2023.
Comparative Knowledge Distillation. IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2023. Alex Wilf, Alex Tianyi Xu, Paul Pu Liang, A. Obolenskiy, Daniel Fried, Louis-Philippe Morency. 03 Nov 2023.
Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models. Neural Information Processing Systems (NeurIPS), 2023. Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang. 02 Nov 2023.
Group Distributionally Robust Knowledge Distillation. Konstantinos Vilouras, Xiao Liu, Pedro Sanchez, Alison Q. O'Neil, Sotirios A. Tsaftaris. 01 Nov 2023.
One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation. Neural Information Processing Systems (NeurIPS), 2023. Zhiwei Hao, Jianyuan Guo, Kai Han, Yehui Tang, Han Hu, Yunhe Wang, Chang Xu. 30 Oct 2023.
Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model. International Conference on Learning Representations (ICLR), 2023. Karsten Roth, Lukas Thede, Almut Sophia Koepke, Oriol Vinyals, Olivier J. Hénaff, Zeynep Akata. 26 Oct 2023.
torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP. Yoshitomo Matsubara. 26 Oct 2023.
Understanding the Effects of Projectors in Knowledge Distillation. Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang. 26 Oct 2023.
Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images. Logan Frank, Jim Davis. 20 Oct 2023.
Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection. Neural Information Processing Systems (NeurIPS), 2023. Lingchen Meng, Xiyang Dai, Jianwei Yang, Dongdong Chen, Yinpeng Chen, Yi-Ling Chen, Zuxuan Wu, Lu Yuan, Yu-Gang Jiang. 18 Oct 2023.
Getting aligned on representational alignment. Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, ..., Thomas Unterthiner, Andrew Kyle Lampinen, Klaus-Robert Müller, M. Toneva, Thomas Griffiths. 18 Oct 2023.
Exploiting User Comments for Early Detection of Fake News Prior to Users' Commenting. Qiong Nan, Qiang Sheng, Juan Cao, Yongchun Zhu, Danding Wang, Guang Yang, Jintao Li, Kai Shu. 16 Oct 2023.