arXiv: 2408.04145
ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-traning Model
8 August 2024
Yifan Chen, Xiaozhen Qiao, Zhe Sun, Xuelong Li
VLM
Papers citing "ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-traning Model"
5 / 5 papers shown
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
VLM
12 May 2025
Bidirectional Prototype-Reward co-Evolution for Test-Time Adaptation of Vision-Language Models
Xiaozhen Qiao, Peng Huang, Jiakang Yuan, Xianda Guo, Bowen Ye, Zhe Sun, Xuelong Li
12 Mar 2025
CLIP-PING: Boosting Lightweight Vision-Language Models with Proximus Intrinsic Neighbors Guidance
Chu Myaet Thwal, Ye Lin Tun, Minh N. H. Nguyen, Eui-nam Huh, Choong Seon Hong
VLM
05 Dec 2024
MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric
Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei, Zhenan Sun
CLIP, VLM
12 Mar 2024
Improving CLIP Robustness with Knowledge Distillation and Self-Training
Clement Laroudie, Andrei Bursuc, Mai Lan Ha, Gianni Franchi
VLM
19 Sep 2023