ResearchTrend.AI

Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models (arXiv:2311.01441)

2 November 2023
Andy Zhou
Jindong Wang
Yu-xiong Wang
Haohan Wang
    VLM

Papers citing "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models"

4 / 4 papers shown
  • Feature Separation and Recalibration for Adversarial Robustness
    Woo Jae Kim, Y. Cho, Junsik Jung, Sung-eui Yoon (AAML) — 24 Mar 2023
  • Enhance the Visual Representation via Discrete Adversarial Training
    Xiaofeng Mao, YueFeng Chen, Ranjie Duan, Yao Zhu, Gege Qi, Shaokai Ye, Xiaodan Li, Rong Zhang, Hui Xue — 16 Sep 2022
  • Collaborative Distillation for Ultra-Resolution Universal Style Transfer
    Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang — 18 Mar 2020
  • A Style-Based Generator Architecture for Generative Adversarial Networks
    Tero Karras, S. Laine, Timo Aila — 12 Dec 2018