DisCoM-KD: Cross-Modal Knowledge Distillation via Disentanglement Representation and Adversarial Learning

5 August 2024
Dino Ienco, C. Dantas

Papers citing "DisCoM-KD: Cross-Modal Knowledge Distillation via Disentanglement Representation and Adversarial Learning"

2 of 2 citing papers shown:
DistilVPR: Cross-Modal Knowledge Distillation for Visual Place Recognition
Sijie Wang, Rui She, Qiyu Kang, Xingchao Jian, Kai Zhao, Yang Song, Wee Peng Tay
17 Dec 2023
Towards Counterfactual Image Manipulation via CLIP
Yingchen Yu, Fangneng Zhan, Rongliang Wu, Jiahui Zhang, Shijian Lu, Miaomiao Cui, Xuansong Xie, Xiansheng Hua, C. Miao
06 Jul 2022