Contrastive Representation Distillation
23 October 2019 · arXiv:1910.10699
Yonglong Tian, Dilip Krishnan, Phillip Isola
Papers citing "Contrastive Representation Distillation" (showing 50 of 611)

| Title | Authors | Tags | Date |
| --- | --- | --- | --- |
| Learning to Project for Cross-Task Knowledge Distillation | Dylan Auty, Roy Miles, Benedikt Kolbeinsson, K. Mikolajczyk | | 21 Mar 2024 |
| Scale Decoupled Distillation | Shicai Wei | | 20 Mar 2024 |
| HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation | Sha Zhang, Jiajun Deng, Lei Bai, Houqiang Li, Wanli Ouyang, Yanyong Zhang | 3DPC | 18 Mar 2024 |
| Don't Judge by the Look: Towards Motion Coherent Video Representation | Yitian Zhang, Yue Bai, Huan Wang, Yizhou Wang, Yun Fu | | 14 Mar 2024 |
| Distilling the Knowledge in Data Pruning | Emanuel Ben-Baruch, Adam Botach, Igor Kviatkovsky, Manoj Aggarwal, Gérard Medioni | | 12 Mar 2024 |
| Attention is all you need for boosting graph convolutional neural network | Yinwei Wu | GNN | 10 Mar 2024 |
| Frequency Attention for Knowledge Distillation | Cuong Pham, Van-Anh Nguyen, Trung Le, Dinh Q. Phung, Gustavo Carneiro, Thanh-Toan Do | | 09 Mar 2024 |
| Learning to Maximize Mutual Information for Chain-of-Thought Distillation | Xin Chen, Hanxian Huang, Yanjun Gao, Yi Wang, Jishen Zhao, Ke Ding | | 05 Mar 2024 |
| Logit Standardization in Knowledge Distillation | Shangquan Sun, Wenqi Ren, Jingzhi Li, Rui Wang, Xiaochun Cao | | 03 Mar 2024 |
| Weakly Supervised Monocular 3D Detection with a Single-View Image | Xue-Qiu Jiang, Sheng Jin, Lewei Lu, Xiaoqin Zhang, Shijian Lu | | 29 Feb 2024 |
| On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models | Juliette Marrie, Michael Arbel, Julien Mairal, Diane Larlus | VLM, MQ | 17 Feb 2024 |
| Knowledge Distillation Based on Transformed Teacher Matching | Kaixiang Zheng, En-Hui Yang | | 17 Feb 2024 |
| Graph Inference Acceleration by Learning MLPs on Graphs without Supervision | Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye | | 14 Feb 2024 |
| Large Language Model Meets Graph Neural Network in Knowledge Distillation | Shengxiang Hu, Guobing Zou, Song Yang, Yanglan Gan, Bofeng Zhang, Yixin Chen | | 08 Feb 2024 |
| Data-efficient Large Vision Models through Sequential Autoregression | Jianyuan Guo, Zhiwei Hao, Chengcheng Wang, Yehui Tang, Han Wu, Han Hu, Kai Han, Chang Xu | VLM | 07 Feb 2024 |
| Good Teachers Explain: Explanation-Enhanced Knowledge Distillation | Amin Parchami-Araghi, Moritz Bohle, Sukrut Rao, Bernt Schiele | FAtt | 05 Feb 2024 |
| Precise Knowledge Transfer via Flow Matching | Shitong Shao, Zhiqiang Shen, Linrui Gong, Huanran Chen, Xu Dai | | 03 Feb 2024 |
| Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF | Banghua Zhu, Michael I. Jordan, Jiantao Jiao | | 29 Jan 2024 |
| Rethinking Centered Kernel Alignment in Knowledge Distillation | Zikai Zhou, Yunhang Shen, Shitong Shao, Linrui Gong, Shaohui Lin | | 22 Jan 2024 |
| Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information | Linfeng Ye, Shayan Mohajer Hamidi, Renhao Tan, En-Hui Yang | VLM | 16 Jan 2024 |
| Source-Free Cross-Modal Knowledge Transfer by Unleashing the Potential of Task-Irrelevant Data | Jinjin Zhu, Yucheng Chen, Lin Wang | | 10 Jan 2024 |
| Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing | Zhe Kong, Wentian Zhang, Tao Wang, Kaihao Zhang, Yuexiang Li, Xiaoying Tang, Wenhan Luo | AAML, CVBM | 02 Jan 2024 |
| MIM4DD: Mutual Information Maximization for Dataset Distillation | Yuzhang Shang, Zhihang Yuan, Yan Yan | DD | 27 Dec 2023 |
| Revisiting Knowledge Distillation under Distribution Shift | Songming Zhang, Ziyu Lyu, Xiaofeng Chen | | 25 Dec 2023 |
| Segment Any Events via Weighted Adaptation of Pivotal Tokens | Zhiwen Chen, Zhiyu Zhu, Yifan Zhang, Junhui Hou, Guangming Shi, Jinjian Wu | | 24 Dec 2023 |
| Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation | Chengming Hu, Haolun Wu, Xuan Li, Chen-li Ma, Xi Chen, Jun Yan, Boyu Wang, Xue Liu | | 22 Dec 2023 |
| Let All be Whitened: Multi-teacher Distillation for Efficient Visual Retrieval | Zhe Ma, Jianfeng Dong, Shouling Ji, Zhenguang Liu, Xuhong Zhang, Zonghui Wang, Sifeng He, Feng Qian, Xiaobo Zhang, Lei Yang | | 15 Dec 2023 |
| RdimKD: Generic Distillation Paradigm by Dimensionality Reduction | Yi Guo, Yiqian He, Xiaoyang Li, Haotong Qin, Van Tung Pham, Yang Zhang, Shouda Liu | | 14 Dec 2023 |
| Augmentation-Free Dense Contrastive Knowledge Distillation for Efficient Semantic Segmentation | Jiawei Fan, Chao Li, Xiaolong Liu, Meina Song, Anbang Yao | | 07 Dec 2023 |
| Contrastive Learning-Based Spectral Knowledge Distillation for Multi-Modality and Missing Modality Scenarios in Semantic Segmentation | Aniruddh Sikdar, Jayant Teotia, Suresh Sundaram | | 04 Dec 2023 |
| Initializing Models with Larger Ones | Zhiqiu Xu, Yanjie Chen, Kirill Vishniakov, Yida Yin, Zhiqiang Shen, Trevor Darrell, Lingjie Liu, Zhuang Liu | | 30 Nov 2023 |
| Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-specific Models | Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari, Oncel Tuzel | | 30 Nov 2023 |
| Topology-Preserving Adversarial Training | Xiaoyue Mi, Fan Tang, Yepeng Weng, Danding Wang, Juan Cao, Sheng Tang, Peng Li, Yang Liu | | 29 Nov 2023 |
| LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS | Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, Zhangyang Wang | 3DGS | 28 Nov 2023 |
| Maximizing Discrimination Capability of Knowledge Distillation with Energy Function | Seonghak Kim, Gyeongdo Ham, Suin Lee, Donggon Jang, Daeshik Kim | | 24 Nov 2023 |
| Cosine Similarity Knowledge Distillation for Individual Class Information Transfer | Gyeongdo Ham, Seonghak Kim, Suin Lee, Jae-Hyeok Lee, Daeshik Kim | | 24 Nov 2023 |
| Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning | Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim | | 23 Nov 2023 |
| Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction | Mohamed El Amine Elforaici, E. Montagnon, Francisco Perdigon Romero, W. Le, F. Azzi, Dominique Trudel, Bich Nguyen, Simon Turcotte, An Tang, Samuel Kadoury | MedIm | 17 Nov 2023 |
| Lite it fly: An All-Deformable-Butterfly Network | Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Binxiao Huang, Jie Ran, Ngai Wong | | 14 Nov 2023 |
| Teach me with a Whisper: Enhancing Large Language Models for Analyzing Spoken Transcripts using Speech Embeddings | Fatema Hasan, Yulong Li, James R. Foulds, Shimei Pan, Bishwaranjan Bhattacharjee | | 13 Nov 2023 |
| Text Representation Distillation via Information Bottleneck Principle | Yanzhao Zhang, Dingkun Long, Zehan Li, Pengjun Xie | | 09 Nov 2023 |
| Self-Supervised Learning of Representations for Space Generates Multi-Modular Grid Cells | Rylan Schaeffer, Mikail Khona, Tzuhsuan Ma, Cristobal Eyzaguirre, Sanmi Koyejo, Ila Rani Fiete | SSL | 04 Nov 2023 |
| Comparative Knowledge Distillation | Alex Wilf, Alex Tianyi Xu, Paul Pu Liang, A. Obolenskiy, Daniel Fried, Louis-Philippe Morency | VLM | 03 Nov 2023 |
| Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | Andy Zhou, Jindong Wang, Yu-xiong Wang, Haohan Wang | VLM | 02 Nov 2023 |
| Group Distributionally Robust Knowledge Distillation | Konstantinos Vilouras, Xiao Liu, Pedro Sanchez, Alison Q. O'Neil, Sotirios A. Tsaftaris | OOD | 01 Nov 2023 |
| One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation | Zhiwei Hao, Jianyuan Guo, Kai Han, Yehui Tang, Han Hu, Yunhe Wang, Chang Xu | | 30 Oct 2023 |
| Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model | Karsten Roth, Lukas Thede, Almut Sophia Koepke, Oriol Vinyals, Olivier J. Hénaff, Zeynep Akata | AAML | 26 Oct 2023 |
| torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP | Yoshitomo Matsubara | VLM | 26 Oct 2023 |
| Understanding the Effects of Projectors in Knowledge Distillation | Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang | | 26 Oct 2023 |
| Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images | Logan Frank, Jim Davis | | 20 Oct 2023 |