ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-training Model

arXiv:2408.04145 · 8 August 2024
Yifan Chen, Xiaozhen Qiao, Zhe Sun, Xuelong Li
VLM

Papers citing "ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-training Model"

4 / 4 papers shown
Bidirectional Prototype-Reward co-Evolution for Test-Time Adaptation of Vision-Language Models
Xiaozhen Qiao, Peng Huang, Jiakang Yuan, Xianda Guo, Bowen Ye, Zhe Sun, Xuelong Li
12 Mar 2025
CLIP-PING: Boosting Lightweight Vision-Language Models with Proximus Intrinsic Neighbors Guidance
Chu Myaet Thwal, Ye Lin Tun, Minh N. H. Nguyen, Eui-nam Huh, Choong Seon Hong
VLM
05 Dec 2024
MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric
Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei, Zhenan Sun
CLIP, VLM
12 Mar 2024
Improving CLIP Robustness with Knowledge Distillation and Self-Training
Clement Laroudie, Andrei Bursuc, Mai Lan Ha, Gianni Franchi
VLM
19 Sep 2023