ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-training Model (arXiv:2408.04145)
8 August 2024
Yifan Chen, Xiaozhen Qiao, Zhe Sun, Xuelong Li
Tags: VLM
Papers citing "ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-training Model" (4 / 4 papers shown)
Bidirectional Prototype-Reward co-Evolution for Test-Time Adaptation of Vision-Language Models
Xiaozhen Qiao, Peng Huang, Jiakang Yuan, Xianda Guo, Bowen Ye, Zhe Sun, Xuelong Li
12 Mar 2025 · 55 / 0 / 0

CLIP-PING: Boosting Lightweight Vision-Language Models with Proximus Intrinsic Neighbors Guidance
Chu Myaet Thwal, Ye Lin Tun, Minh N. H. Nguyen, Eui-nam Huh, Choong Seon Hong
Tags: VLM
05 Dec 2024 · 72 / 0 / 0

MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric
Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei, Zhenan Sun
Tags: CLIP, VLM
12 Mar 2024 · 42 / 13 / 0

Improving CLIP Robustness with Knowledge Distillation and Self-Training
Clement Laroudie, Andrei Bursuc, Mai Lan Ha, Gianni Franchi
Tags: VLM
19 Sep 2023 · 20 / 5 / 0